Scripted GUI Notes

From Armagetron
Revision as of 03:05, 22 October 2005 by Lucifer (talk | contribs)

Lucifer sez

Ok, assumptions. The base assumption is that there will be an event system. The reason I make this assumption is that we already have a callback system (I don't know how much it does, though), and an event system is just a layer of centralized control on top of a callback system, so we can theoretically get there quickly.

To build a scripted GUI, we need event classes and widget classes that can be called from C++ (obviously). We need all the regular graphical primitives implemented in C++, and we'll probably accumulate more complicated graphics routines in C++ that are available for widgets to draw with. I could build a python-based setup where you "inject" input, just like what CEGUI appears to do. As a matter of fact, the eventManager class I wrote for Acme supports that idiom directly. The widgets I wrote for Acme could also be revised to use graphical primitives exposed by arma, so I could port them to arma as a start, but it'd be ideal if the base widget classes and the event manager itself were all implemented in C++.

The advantages are that we'd have a good plugin gui system (I've written two plugin managers in python; I could take either one or write a third), and we'd have a gui that is, in theory, easier to maintain. The disadvantage is that the gui requires python, and I know of at least one person around here who wants to be able to build the game without python. If we do it that way, then we need a certain portion of the base gui library written in C++ anyway.

In the event system, we need a regular range of input events and custom events posted by widgets. We also need events posted by the game itself, and the game needs some way to distinguish widgets. I'd like to make it so that the HUD is made up of regular widgets that receive events to control the data displayed, and possibly send events based on user input.
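As a sketch of that HUD idea (the class, the plain-dict event shape, and the event names are all made up here for illustration, not anything arma actually has):

```python
# Illustrative HUD widget driven by game events rather than polling.
# The event representation (a plain dict) and the names are assumptions.
class ScoreDisplay:
    """HUD widget showing a player's score, updated via events."""

    def __init__(self):
        self.score = 0

    def on_event(self, event):
        # The game posts a "score_changed" event; the widget just records
        # the new value and would draw it on the next frame.
        if event["type"] == "score_changed":
            self.score = event["score"]
```

The point is that the HUD never reaches into game state directly; the game distinguishes the widget only as another event listener.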

We could rewrite all the necessary parts of input to use events internally for all input and eliminate polling to some extent. (We could either poll directly and create our events, or do what I did in Acme, which is to pull SDL's event queue and create our own events from those.) The tradeoff is latency, of course. An event in a queue that says "turn left" just means that at some point recently the player said "turn left", and there is a small amount of time from when the input is received to when the event is processed. I should point out that GLTron uses an event system internally. :) (Maybe we can lift some of his code for it, but I suspect it would be better to just use my eventManager class as a model.) That would simplify everything, and we'd just make the grid display another widget.
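The "pull SDL's event queue and create my own events" step can be sketched like this; the key names and bindings are placeholders, and raw events are modeled as plain tuples so the sketch doesn't depend on SDL itself:

```python
# Translate raw input events (modeled here as (kind, key) tuples, standing
# in for SDL's queue) into the game's own event vocabulary. Unbound keys
# are simply dropped.
KEYBINDINGS = {"K_LEFT": "turn_left", "K_RIGHT": "turn_right"}

def translate_input(raw_events):
    game_events = []
    for kind, key in raw_events:
        if kind == "KEYDOWN" and key in KEYBINDINGS:
            game_events.append(KEYBINDINGS[key])
    return game_events
```

A "turn_left" pulled from this queue still carries the latency mentioned above: it only records that the player turned left at some recent point, not that it happened this instant.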

Keeping the grid display out of the widget hierarchy would have a set of complications, but would provide some benefit in return, mostly performance.

Now, it would be reasonable to implement the gui in python and then someone (who wants to build without python) could implement the core parts of the widget set in C++. IMO, of course.

So what it looks like is this:

In your input polling routine, you take each input you find, create an event that describes it, and post the event to the event queue.

In your main program loop, you do two things. First, you tell the eventManager to process the queue. Then you iterate through all objects and call update(timestep) on each one. In Acme, 'iterate through all objects' means updating the GUI, but in arma it might be a separate step.
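Those two steps can be sketched like this (the names are mine for illustration, not Acme's actual code; StubEventManager stands in for a real event manager):

```python
# Two-step main loop: drain the event queue, then tick every object with
# the elapsed timestep.
import time

class StubEventManager:
    """Placeholder for the real event manager."""
    def __init__(self):
        self.queue = []

    def process_queue(self):
        # Dispatch to registered handlers would happen here.
        self.queue.clear()

def run_frames(event_manager, objects, frames):
    last = time.monotonic()
    for _ in range(frames):
        event_manager.process_queue()       # step one: handle pending events
        now = time.monotonic()
        for obj in objects:
            obj.update(now - last)          # step two: advance each object
        last = now
```

Anything with an update(timestep) method fits this loop, which is what makes "updating the GUI" and "updating the game" interchangeable or separable steps.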

The event manager stores all the callbacks in one place. To make an "event handler", you tell the event manager what source of events you're watching, what type of event you want, and the function/method that will be called. So when an event is generated from that source with the type you want, your callback will be called with the event as its argument. Anything more complicated than that should only be implemented when needed. ;) (My eventManager class is only like 50 lines or so, if that.)
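A minimal sketch along those lines (this is the registration-and-dispatch idiom described above, not Acme's actual eventManager):

```python
# Minimal event manager: handlers register for a (source, type) pair,
# posted events queue up, and process_queue() calls each matching handler
# with the event as its argument.
class Event:
    def __init__(self, source, type, data=None):
        self.source = source
        self.type = type
        self.data = data

class EventManager:
    def __init__(self):
        self.handlers = {}   # (source, type) -> list of callbacks
        self.queue = []

    def register(self, source, type, callback):
        self.handlers.setdefault((source, type), []).append(callback)

    def post(self, event):
        self.queue.append(event)

    def process_queue(self):
        # Swap the queue out first so handlers can safely post new events.
        pending, self.queue = self.queue, []
        for event in pending:
            for callback in self.handlers.get((event.source, event.type), []):
                callback(event)
```

That's the whole idiom, and it comes in well under the 50-line figure quoted above.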

It's not hard; we've already got a lot of it in place. We've also already got the menu primitives and font primitives we need; we just need lines, boxes, circles, and maybe a few miscellaneous drawing primitives.
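To make that concrete, here is a hypothetical sketch of how a scripted widget might draw itself through primitives the C++ side would expose; the Canvas class and its method names are inventions for illustration, not an arma API:

```python
# Canvas stands in for drawing primitives exposed from C++; it records
# calls so the sketch can run without any rendering backend.
class Canvas:
    def __init__(self):
        self.calls = []

    def box(self, x, y, w, h):
        self.calls.append(("box", x, y, w, h))

    def line(self, x1, y1, x2, y2):
        self.calls.append(("line", x1, y1, x2, y2))

class Divider:
    """A trivial widget: a horizontal rule drawn with the line primitive."""

    def __init__(self, x, y, width):
        self.x, self.y, self.width = x, y, width

    def draw(self, canvas):
        canvas.line(self.x, self.y, self.x + self.width, self.y)
```

The widget never touches OpenGL directly; it only calls the small primitive set, which is what keeps the rendering engine centralized on the C++ side.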

Here's the last bit. If we don't write the drawing primitives in arma for the scripted widget set, then we have to introduce another dependency, in this case it would be pyOpenGL. And instead of factoring and centralizing the rendering engine we'd be spreading it out further.

After writing all this I went back and reread z-man's question. Yes, there are several GUI libraries for python, and even some for pygame. Every one of them introduces another dependency or three (wxPython being both the worst and the best about it). The ones for pygame are all relatively primitive, and the ones I looked at were either unmaintained, hard to stick in a new project (read: would take hours of reading code before you could write the first line of your game), or both. I made the choice to write a new widget set for those reasons. :) Mine may not be fundamentally better for other programmers than any of the others, but it's fundamentally better for me because I know it inside and out already.

In any case, I didn't find any widget set for python/pygame that I think would be easy to fit into arma. I think it would be easier to fit in a C++ widget set (and this CEGUI looks the easiest to cram in there, if it builds). But I think it would be easiest/best to go ahead and implement scripting in arma first, because a scripted GUI will drive a great deal of internal scripting support in the game, and the long-term benefits far outweigh the short-term work (except in the case of "this works now" vs "this is planned but nobody has done it yet").