Saturday, September 29, 2007

Gallium3D, Shaders and LLVM

Today we're going to talk about shaders. Well, I'll talk, or to be more specific write, or to be blunt I'll pretend like I'm actually capable of putting my thoughts into readable excerpts that other human beings (hopefully you) and some of my imaginary friends (they're not all winners) can understand.

The question I've been asked a few times during the last week was "who are you and what are you doing in the bushes outside my house", which isn't related to computer graphics at all and what I do in my spare time is none of your business so I won't be talking about that. Now the other question that I've heard a few times during the last week was "will Gallium3D use LLVM?", the short answer is "yes, it will".

First of all, a little about graphics hardware. A common thing to do in modern graphics hardware is to have very wide registers and allow stuffing arbitrary vectors inside those registers. For example one register might very well store eight 2-component vectors. Or one component each of 16 different vectors, with the other components stored in subsequent registers. To support writing to those wide registers there's usually another register, often a stack of them, which is used as a write mask for all operations. Cool, eh? So now when your language supports, god forbid, branches or loops and you want to generate code for graphics hardware, you're left with two options. Option one is to give up and go ride donkeys in a circus, and option two is to do something crazy to make it work. To be honest I've never even seen a real life donkey. I've seen a cow but we just didn't hit it off. So I knew that option one was just not right for me.
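Since branching on write-masked hardware is the part that sounds impossible, here's a tiny illustrative sketch (plain Python, nothing like real driver code, all names invented) of the usual trick: run both sides of a branch for every lane, and let the per-lane mask act as the write mask that decides which result each lane keeps.

```python
# Toy model of branch "flattening" on wide, write-masked hardware.
# Both sides of the branch execute for all lanes; the mask selects
# the surviving result per lane. Purely illustrative.
def exec_branch(cond, then_fn, else_fn, values):
    mask = [cond(v) for v in values]         # per-lane condition bits
    then_r = [then_fn(v) for v in values]    # run "then" for every lane
    else_r = [else_fn(v) for v in values]    # run "else" for every lane
    # the mask plays the role of the hardware write mask
    return [t if m else e for m, t, e in zip(mask, then_r, else_r)]
```

The cost is that both sides always execute, which is exactly why code generating branches for this kind of hardware is "something crazy".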

So one of the big worries that we had was whether we'd be able to generate code from LLVM for graphics hardware. After some discussions about pattern matching in code generators and opcode lowering it finally looks like the answer is "yes, we will be able to generate something usable". The way it will work in Gallium3D is largely similar to the way I wanted to do it in the LLVM GLSL code that Roberto and I worked on for Mesa a few months back. The difference is that the IR in Gallium3D is completely language agnostic.
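To give a feel for what "opcode lowering" means here, a toy sketch: an operation the hardware lacks gets rewritten into operations it has. The opcode names and expression-tree shape below are invented for the example; they are not Gallium3D's actual IR.

```python
# Toy illustration of opcode lowering. LRP (linear interpolation) is
# rewritten into MUL/SUB/ADD, which a simpler chip supports.
def lower_lrp(a, x, y):
    # LRP(a, x, y) = a*x + (1 - a)*y, expressed with MUL/SUB/ADD only
    return ("ADD", ("MUL", a, x), ("MUL", ("SUB", 1.0, a), y))

def evaluate(node):
    # Tiny interpreter so the lowered expression tree can be checked.
    if not isinstance(node, tuple):
        return node
    op, lhs, rhs = node
    lhs, rhs = evaluate(lhs), evaluate(rhs)
    return {"ADD": lhs + rhs, "SUB": lhs - rhs, "MUL": lhs * rhs}[op]
```

A real code generator does this over pattern-matched instruction trees rather than one opcode at a time, but the principle is the same.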

You can run OpenGL examples already, granted that some of them will not produce correct results, but if it all just worked then I'd have nothing to blog about. I'll start integrating the LLVM parts within the next two weeks, which is when performance should get a major boost and flowers should bloom everywhere. You might think that the latter is not, technically, related to our work on Gallium3D, and the fact that Autumn is here makes that last statement even more dubious, but you're wrong. Who would you rather trust, you or me? I bet you thought "me" and so I rest my case.


And all of that is brought to you without any sheep sacrifice and hardly any virgin sacrifice ("hardly any" because I, as a representative virgin, am making a small sacrifice, but from what I understand it doesn't count as a full-fledged "virgin sacrifice").
How do you like them apples? (Or oranges... or strawberries... I like raspberries... They're all good is, I guess, my point.)

Friday, September 21, 2007

Gallium3D

Critics are raving: "Gallium 3D is the best thing that ever happened to Free Software graphics", "It's breathtaking!", "Never before has nudity been so tasteful!"... Alright, maybe not the last one. Actually none of them, since it's a brand new project. In fact that's the point of this entry. To introduce you two.

You, a brilliant (as derived from the fact that you're reading this blog) Free Software enthusiast or simply my very own stalker (both options very satisfying to me personally). And Gallium3D, the foundation of Free Software graphics for years to come.

Gallium3D is a redesign of Mesa's device driver model. It's a new approach to the problem of accelerating graphics. Given the tremendous investment that free desktops are making in OpenGL nowadays, I'm very excited to be working on it.

At Tungsten Graphics we've decided that we need a device driver model that would:
  • make drivers smaller and simpler
  • model modern graphics hardware
  • support multiple graphics APIs
The basic model, as presented by Keith Whitwell at XDS2007, looks as follows:


You can follow the development of Gallium as it happens in Mesa's gallium-0.1 branch.

Also you can read a detailed explanation of what it is on our wiki.

Now why should you be excited (besides the fact that, like I already pointed out, there's no developer nudity in it, and that being excited about the stuff I'm excited about is in general a good idea)?
  • Faster graphics
  • Better and more stable drivers
  • OpenGL 3
  • Ability to properly accelerate other graphics APIs through the same framework. Did someone say OpenVG?
This is a huge step on our road to tame the "accelerated graphics" demon in Free Software. We've been talking about it for a long time and now we're finally doing it. There's something zen-like about working on free software graphics for years and finally seeing all the pieces fall into place.

Monday, September 10, 2007

Git cheat sheet

Due to the fact that I've been moving, I forgot to point out that about three weeks ago I created a small Git cheat sheet. Quoting my email to the Git mailing list: I took a short break from being insanely handsome (which takes a lot of my time - gorgeous doesn't just happen) and, based on similar work for Mercurial, created a little SVG cheat sheet for Git. I'm not sure if it's going to be useful for anyone else (the target audience was composed of engineers who agreed to move to and work from Norway, so you know right off the bat that historically they've already made some bad decisions), but the times when I do art are so rare that I feel the need to share.

The thing that I took from the Mercurial sheet, besides the idea, is the flow chart (people dig ice cream and flow charts; the first one is really hard to get into an SVG rendering so I went with the second), so the license is the same as that of the Mercurial sheet, which was Creative Commons. There are likely a few errors in it, and if you have any suggestions, or if you sport latex pants and a fancy green hairdo that goes with those pants (which equals the fact that you're an artist) and would like to pimp the sheet out, it would be my pleasure to help you.




The SVG is at:
http://byte.kde.org/~zrusin/git/git-cheat-sheet.svg
Sample PNGs are here:
http://byte.kde.org/~zrusin/git/git-cheat-sheet-medium.png
http://byte.kde.org/~zrusin/git/git-cheat-sheet-large.png

I also got up to speed on all the latest announcements. I thought that Novell's Silverlight collaboration announcement was disappointing. I'm referring to the "Microsoft will provide Novell the specifications for Silverlight" part.

Richard Leakey once said "We are human because our ancestors learned to share their food and their skills in an honored network of obligation". I love that quote because it so beautifully describes what we, so heavily, rely on in the Open Source community. For a company to take from the great ocean of free knowledge, led by the open SVG standard, and end up with a closed specification is just disgusting. Seeing an Open Source company strike a deal to cooperate on that closed technology is just sad to me. I understand why they did it, but understanding something doesn't make it morally right.

I really hope, pointless as it might be, that the work on the Silverlight specification and the specification itself will be open. You obviously thought that SVG wasn't good enough for your purposes, and you built on top of the experiences and ideas taken from SVG. Let us improve SVG based on your experiences and ideas. Once we've done that, you'll be able to repeat that process again. That's the way it works and that's the way our society has always worked.

Despite what you might think, you don't own ideas, they belong to us all.

Thursday, September 06, 2007

Quasar

Does working on the 3D stack make me look fat? Give it to me straight. I promised myself I won't cry no matter what the answer is. As I mentioned in my last blog I'll be getting awfully kinky with our 3D stack, and people wondered whether my work on it affects Quasar. Well, just as moving didn't affect my habits with regards to taking showers (which I do frequently, i.e. every Christmas or so), smoking (never ever), being awake (at least once per day), it also didn't affect my spare time habits (except the "starving" thing, which was one of my favorite pastimes in Norway). Since Quasar was always my spare time project, my move affected it as much as me buying new sneakers affects the temperature in Japan (which it doesn't, assuming of course that the folks in Japan won't find the vision of me in the new sneakers so insanely hot as to increase the temperature of the whole country).

Aaaanyway, so what is Quasar? Besides being a word that starts with a Q, has some other letters in the middle and sounds funky. It's a dynamic rendering framework, which in turn begs the question "what the hell is that?" ("the hell" being an optional part of this sentence).

Let me lead you through the evolution of Quasar and hopefully at end of this trip we'll be on the same page with regards to what we really need in terms of a graphical framework on the desktop.

Quasar started as a pretty basic image manipulation framework. I wanted to make sure that people can chain filters e.g. read-image->scale image->blur image->render image. It was a linear pipeline. The thing that made it important to me was that it was focused on hardware acceleration. This is a very important point which I want to underline.
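The linear pipeline idea above is simple enough to sketch in a few lines of Python. This is a toy model only; the stage names are stand-ins, not Quasar's actual filters or API.

```python
# Toy model of a linear filter pipeline: each stage maps image -> image
# and stages are chained left to right, like read -> scale -> blur -> render.
def run_pipeline(image, stages):
    for stage in stages:
        image = stage(image)
    return image

def scale(factor):
    # stand-in "filter": multiply every pixel value by a factor
    return lambda img: [px * factor for px in img]

def clamp(lo, hi):
    # stand-in "filter": clamp pixel values into [lo, hi]
    return lambda img: [min(max(px, lo), hi) for px in img]
```

The point of the hardware-acceleration focus is that each stage in the real thing would be a GPU operation, not a Python loop; the chaining model stays the same either way.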

I'm not interested in software based effects. This is not to say that the software versions of all of them shouldn't work, I just refuse to bother. It's 2007; even phones come out with some kind of GPU in them. Worrying about software based graphical effects does to your time what listening to financial advice from blogs of Open Source engineers does to your money - wastes it (which reminds me, I also have some financial tips for you. Well, tip, just one. It's not the quantity but the quality, as they say. Invest money in winning lottery tickets. Note that I said "winning", buying any other kind would be silly. You're welcome). With the work we're doing right now at Tungsten Graphics, our graphics stack will be extremely good at it.

When I was a kid we didn't have fancy graphics. We had crackheads, alcoholics, pimps, thugs and plenty of other shady characters, none of whom, I'm fairly certain, did computer graphics (childhood memories warm your heart, don't they?). If my years of working on a vector graphics framework taught me anything (besides how to write a vector graphics framework) it's that the most fundamental part of a high level graphics framework isn't the framework at all. It's the way in which you let people design their applications, their interfaces. What use are blurred perspective transformations if only two people know how to use them? What's the point of 20 composition modes if people can't be bothered to understand the basic two?

So I started seeing effects as a step, a very crucial one, on the way to very appealing interfaces, but not my ultimate goal. In the desktop world we're dealing with graphics on a fairly abstract level, because we're doing graphics for non-graphics people. A good effects framework is not one that has every possible effect ever invented, but one that makes it possible to quickly add those effects to one's application/interface and end up with something usable and pretty.

Finally, the inherent complexity of graphics bothered me. Maybe not even the complexity itself (especially since it's where my job security stems from), but the fact that it takes a lot of knowledge to understand and know the fast paths. The knowledge necessary to extract that logic can be easily abstracted in a graph-like structure. That's what I started doing in Quasar.

Quasar is a dynamic graph. One that can rearrange itself to produce the output in the fastest possible way.
Those are the three basic tenets of Quasar. To rehash, they are:
- hardware accelerated,
- easy to incorporate and produce good looking results,
- smart enough to produce the results in the quickest possible way, no matter how little knowledge its users possess.
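As an illustration of that last point, here's a minimal sketch of what "rearrange itself" can mean: a pass over the graph that fuses two adjacent scale nodes into one before anything executes. The node names are hypothetical and this is far simpler than anything a real rendering graph would do.

```python
# Toy graph-optimization pass: two adjacent "scale" nodes are fused
# into a single one, so the pipeline does one resample instead of two.
# Node names are made up for illustration.
def optimize(nodes):
    out = []
    for node in nodes:
        if out and out[-1][0] == "scale" and node[0] == "scale":
            prev = out.pop()
            out.append(("scale", prev[1] * node[1]))  # fuse the factors
        else:
            out.append(node)
    return out
```

The user just asks for scale-then-scale-then-blur; the graph quietly delivers the cheaper equivalent, which is the "tell me what you'd like and I'll give you that in the best way possible" idea in miniature.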

The way it works is that one creates the rendering in a graphical builder (which at the moment looks very much like Quartz Composer), loads it up in the application and, well, that's it.

Of course, the graph can be created or modified at run-time just as well. It follows the "tell me what you'd like and I'll give you that in the best way possible" principle.
Having said that, Quasar is still far from complete, but I just posted where to get it from on the relevant KDE list (if you don't know which one, trust me, you want to wait a little bit longer before trying it out). Oh, and of course Quasar will support full effects pipelines on top of movies (integrated with Phonon).