|
Silas Silva, 03.09.2010 03:55:
Over the years, I've done GUI applications in several ways: using Tk, Qt, HTML forms, ncurses, etc. But I never found a uniform way to specify a GUI and generate the resulting code for a language/toolkit combination.
What does "the resulting code" mean to you? Just what you need to represent the GUI itself, or also the code that fills it with functionality? I mean, most GUI toolkits have some half-abstract way of matching GUI events with target code, but if you want to go further than that (i.e. *into* the target code), I doubt that it will be very helpful.
This program will read a file (whose content will be a description of a GUI in a DSL (domain-specific language) that I intend to specify soon) and generate the code. So, basically, one would call the program like this:

    program --generate qt-xml < file.dsl > output.ui
    program --generate html < file.dsl > output.html

The DSL should be simple (similar to Tcl/Tk code):

    button foo -text "Foo" -x 10 -y 20
    label ...
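Since the DSL isn't specified yet, here is only a rough sketch of what the "read a description, emit a target format" step could look like in Python. The grammar assumed below (one widget per line: type, name, then `-option value` pairs) and the HTML mapping are my assumptions, not anything Silas has committed to:

```python
import shlex

def parse_line(line):
    # Hypothetical grammar: <widget-type> <name> -option value ...
    tokens = shlex.split(line)          # shlex handles the quoted "Foo"
    widget, name = tokens[0], tokens[1]
    options = {key.lstrip('-'): value
               for key, value in zip(tokens[2::2], tokens[3::2])}
    return widget, name, options

def to_html(widget, name, options):
    # One emitter per target; a qt-xml emitter would sit beside this one.
    if widget == 'button':
        return '<input type="button" name="%s" value="%s">' % (
            name, options.get('text', name))
    elif widget == 'label':
        return '<span id="%s">%s</span>' % (name, options.get('text', ''))
    raise ValueError('unknown widget type: %r' % widget)

print(to_html(*parse_line('button foo -text "Foo" -x 10 -y 20')))
# → <input type="button" name="foo" value="Foo">
```

The layout options (`-x`, `-y`) are parsed but ignored here; mapping absolute coordinates onto HTML (or onto a Qt layout) is exactly the kind of per-toolkit decision such a generator would have to make.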
If I'm not completely mistaken, I read about at least a couple of usable GUI abstractions in the Python world. But you seem to want to abstract from the language binding as well.
Wouldn't it make more sense to use a graphical DSL instead of a textual one? For example, you could use Qt Designer to model your GUI, and then generate the platform-independent GUI representation from that. Or use an HTML editor and map the HTML/CSS output to Qt/GTK+/whatever. Few people write user interfaces manually these days.
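To illustrate the Qt Designer route: the .ui files it saves are plain XML, so a generator can walk them with the standard library and emit another target format. This is a minimal sketch, assuming a trimmed-down .ui sample and covering only two widget classes:

```python
import xml.etree.ElementTree as ET

# Trimmed .ui sample containing just the parts this sketch reads.
UI_SAMPLE = """\
<ui version="4.0">
 <widget class="QWidget" name="Form">
  <widget class="QPushButton" name="foo">
   <property name="text"><string>Foo</string></property>
  </widget>
 </widget>
</ui>"""

def widget_text(widget):
    # Designer stores text as <property name="text"><string>...</string></property>
    node = widget.find('property[@name="text"]/string')
    return node.text if node is not None else ''

def ui_to_html(ui_xml):
    root = ET.fromstring(ui_xml)
    html = []
    for widget in root.iter('widget'):
        cls, name = widget.get('class'), widget.get('name')
        if cls == 'QPushButton':
            html.append('<input type="button" name="%s" value="%s">'
                        % (name, widget_text(widget)))
        elif cls == 'QLabel':
            html.append('<span id="%s">%s</span>' % (name, widget_text(widget)))
    return '\n'.join(html)

print(ui_to_html(UI_SAMPLE))
# → <input type="button" name="foo" value="Foo">
```

The point is that the hard part (drawing the GUI) is done in an existing editor, and the textual intermediate form comes for free.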
4. Am I reinventing the wheel?
I think it's more like trying to invent a square wheel where an oval one already exists.
Stefan