There is one language understood by every culture that does scientific work: math. As far as I'm aware, there is no alternate character set that one needs to translate to or from if you want to express change, vectors, or some other mathematical concept to someone from another culture.
In the same way that the language of math is universally understood in every corner of the world, is standard ASCII the universally accepted character set for source code, with primitive English used for keywords? If that is more true than not, then it might be reasonable to argue that it is better to limit source code to ASCII, rather than innovating to allow a broader range of characters outside of string literals, unless there is a clear and convincing reason to break this norm (if it is a norm).
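To make concrete what "a broader range of characters outside of string literals" might look like, here is a minimal sketch in Python 3, one language that already permits non-ASCII identifiers (PEP 3131). The function and variable names are purely illustrative, not taken from any real codebase:

    import math

    def kreisflaeche(radius):    # ASCII-only identifier ("circle area" in German)
        return math.pi * radius ** 2

    def kreisfläche(radius):     # same word spelled natively, with U+00E4
        π = math.pi              # Greek pi as a local variable name
        return π * radius ** 2

    print(kreisflaeche(2.0))     # 12.566370614359172
    print(kreisfläche(2.0))      # identical result; only the spelling differs

The two functions behave identically; the only question is whether the second spelling should be legal at all, and whether the convenience for non-English speakers outweighs the loss of a lowest-common-denominator character set.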
What problem does it solve? Is support for UTF-8 in source code useful for automated script processing, or for some sort of DSL application?
-Andrew