
July 26, 2009

The Eleventh Hour


Aside from being the title of an awesome children's book (which may or may not still reside on my shelves), this phrase adequately sums up my situation here. I've got slightly under 72 hours to polish and document my Graphics project, touch up an essay on Church's solution to the Entscheidungsproblem of Hilbert's Program, and - of course! - push the trains struggle as far as possible.

(Yes, I know you can see the edges of the skybox in the above image. I probably won't have time to change that before the Graphics demo.)

On the real-time front, we've had a very fortunate breakthrough; the sensor polling bug fell before a quick debugging sprint yesterday morning. We're driving hard to get route-finding working tonight, as this will leave us all of Tuesday (as mentioned, I have other things to worry about tomorrow!) to put together our final demo. Given the time constraints, we'll likely have to go with something almost braindead-simple; the plan currently being floated around is to hack together a cat-and-mouse game using the trains.

But enough about schoolwork! I'm off to Germany on the 30th. Let's hope I manage to drag my bedraggled body to Pearson in time!

Final note: this blog has been very one-dimensional over the last month. Given my situation, this is perhaps understandable; nevertheless, I shiver with antici-

Delay(60);    // pause for 3s (one tick == 50 ms)

-pation at the prospect of having something other than Graphics or Real-Time to talk about.

July 23, 2009

Ore for Wood


5 days to go - I've missed a few here. Graphics is going reasonably well; the past day or so has seen the addition of skyboxes, proper terrain clamping, texture blending on the terrain (albeit a hacked-together software version), smooth normals, water reflection, and a few test scenes to my project. Above: a test scene that shows the use of the skybox (rendered in garish colours for maximum obviousness!), one tree (with eight levels of branching), and a few randomly positioned rocks (two levels of subdivision each).

Real-Time, on the other hand, is coming to an inauspicious close. Our program suffers from a showstopper bug: sensor polling freezes. We managed to hack around this to complete most of the first train control objective, but the second control objective will remain elusive until this is resolved. The PsychOS team is running out of time rapidly, but we still have a few more tricks up our sleeves:
  • We can redesign sensor polling. The current plan is to extend the kernel's event-handling structures to support software events as well; this way, the SensorManager() can trigger EVENT_NEW_SENSOR (or something like that) whenever a new sensor comes along.
  • We can attempt adding a chain of couriers (or similar data-passing tasks) to the train input server. If the problem lies in dropped bytes, this might resolve the issue.
  • Failing all else, we can add some NOPs (or the C equivalent: a busy-wait for loop - our TA actually did this when he took the course.) The mere thought of adding such a blatant hack induces violent fits of cringing, but we'll do it if we need to.
If we can get past this, we already have systems in place to find routes and reserve track sections - they just need to be tested and used.
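For the record, the last-ditch option above is nothing more exotic than this sketch (the loop bound is a placeholder that would need calibrating against the actual board; `volatile` keeps the compiler from optimizing the wait away):

```cpp
#include <cstdint>

// Burn roughly `iterations` loop trips to give slow hardware time to
// catch up. The volatile counter stops the compiler from deleting the
// loop; returning the count makes the function easy to sanity-check.
inline uint32_t busy_wait(uint32_t iterations) {
    volatile uint32_t i = 0;
    while (i < iterations) ++i;
    return i;
}
```

Calibrating the constant is the ugly part: you time it against the hardware clock and hope the compiler settings never change. Hence the cringing.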

Personal notes: it is now quite obvious that the month is drawing to a close. This has been a bizarre ride of ups and downs, of hope and despair, of sleeplessness and take-out meals and 16-hour debugging sprints. I'm feeling strangely calm as I enter the final stretch, in stark contrast to the near-complete-breakdowns that some of those Real-Time sprints drove me to. Not even the possibility of missing the final Real-Time demo fazes me; regardless of what happens with the project, I've learned an incredible amount about hardware, low-level programming, and operating system design - and it's all practical knowledge, especially given that we've been working with the ARM architecture.

If I had to pinpoint the source of our problems, I'd put it down to hubris. The early assignments were relatively easy, the test programs rather trivial; we simply wrote the required code and then went off to take care of our other assignments, working on the assumption that everything would continue to go well. This is never good practice, not even in the classroom. We implemented proper error-checking about three weeks ago, something we should have done three months ago. We should have rigorously tested our kernel, bombarding it with interrupts and I/O and requests and everything else that could possibly bring it to its knees; we did some of that, but it was too little and far too late.

Despite the above tirade - which looks suspiciously like a pre-emptive post-mortem - I should add that I refuse to give up. Once I get Graphics out of the way, we'll dig our heels in this weekend to try anything and everything that might work. Keep posted!

July 18, 2009

Trees!


10 days to go. After plowing through the first few sections of Prusinkiewicz and Lindenmayer's The Algorithmic Beauty of Plants, I hacked together a quick tree renderer. The above image was generated by randomizing the parameters in Figure 2.8, with slightly higher branching angles to fake downward tropism. (Yeah, I know that's not how it's really done - but it's dead simple.)
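The rewriting core of an OL-system is almost embarrassingly small. A sketch (the rules in the test below are the classic two-symbol algae example, not the Figure 2.8 parameters):

```cpp
#include <map>
#include <string>

// One rewriting pass of a deterministic OL-system: every symbol with a
// production rule is replaced in parallel; everything else is copied.
std::string rewrite(const std::string& axiom,
                    const std::map<char, std::string>& rules,
                    int iterations) {
    std::string current = axiom;
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : current) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        current = next;
    }
    return current;
}
```

The renderer then walks the final string with a turtle: F draws a branch segment, + and - turn, [ and ] push and pop the turtle state. Randomizing which production fires per symbol gets you the stochastic version.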

At this point, I need to start trying to mash things together. I could spend several days tweaking my texture, terrain, rock, and plant generators to near-perfection, but I just don't have the time - especially when I've also got to slap in texture mapping, a skybox background, a simple water plane with stencil buffer reflection, interaction (I've got the keyboard working, but I haven't implemented mouselook yet), terrain clamping, and collision detection.

July 15, 2009

Catmulligatawny


13 days to go (I missed one in my daily reporting - blasphemous!) Above: a quick mockup of Catmull-Clark subdivision surfaces on a cube. As you can see, the face normals are still slightly off; nevertheless, the general technique appears to be mostly working (although, as promised, the quads produced are predominantly non-coplanar.)

Why would I bother doing this? Simple - I'm going to make rocks! I'll start with a "random" cube formed by taking a point from each octant. Toss in a couple of Catmull-Clark iterations, perturb the face and edge vertices, and hopefully the end result will be vaguely boulder-esque. I'm also planning to implement stochastic OL-systems for trees and plants. (Trust me: although these sound impressive, they're actually quite easy to piece together.)
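The "one point per octant" trick looks like this (a sketch; `rnd` stands in for whatever random source you like, returning values in (0, 1)):

```cpp
#include <array>

struct Vec3 { double x, y, z; };

// One vertex per octant: magnitudes are random, signs come from the
// octant index bits, so the result is always a lopsided cube-like hull
// ready for a couple of rounds of Catmull-Clark.
std::array<Vec3, 8> random_cube(double (*rnd)()) {
    std::array<Vec3, 8> verts;
    for (int i = 0; i < 8; ++i) {
        double sx = (i & 1) ? 1.0 : -1.0;
        double sy = (i & 2) ? 1.0 : -1.0;
        double sz = (i & 4) ? 1.0 : -1.0;
        verts[i] = { sx * rnd(), sy * rnd(), sz * rnd() };
    }
    return verts;
}
```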

On the Real-Time front: we've finally discarded enough hubris to implement proper fail-fast error checking. We now panic on every buffer overflow and system call error with a very informative message detailing as much of the system state as possible. This approach - something that we should have implemented long ago! - has already located several serious issues. For each one we fix, two more pop up; nevertheless, not all hope is lost. If we can excise enough of these pesky critters by, say, four days before the project deadline, I still think we can cobble together a working demo.

Personal front: I've been aiming for a renewed adherence to my previous plan of proper sleep, decent nutrition, and exercise over the past three days. The result? My stress levels have plummeted, my mood has improved measurably, and I once more regard my projects as fascinating challenges rather than onerous obstacles.

July 11, 2009

Joseph and the Amazing Technicolor Terrain Patch

 
Now in colour! With semi-decent lighting! The random white patches are triangles pointed directly at the sun (or directional light, whichever you prefer); for this test render, I'm only calculating face normals and calling glNormal3d() once before each triangle. One way to get vertex normals is to average over the neighbouring face normals.
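That averaging step is one line per component. A sketch (unit face normals assumed, at least one face required; area weighting is a common refinement I'm skipping here):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Average the unit normals of the faces sharing a vertex, then
// renormalize the sum to get a smooth vertex normal.
Vec3 vertex_normal(const std::vector<Vec3>& face_normals) {
    Vec3 n{0.0, 0.0, 0.0};
    for (const Vec3& f : face_normals) {
        n.x += f.x; n.y += f.y; n.z += f.z;
    }
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```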

Topographically Speaking

 
17 days to go. As promised, I've hacked together heightmap functionality; this one is based on the diamond-square algorithm. Next up: lighting, texturing, keyboard and mouse interaction, and some kind of sky (I'll probably go with a skybox for that one.) Real-Time is coming along, albeit not as quickly as I had hoped. We're now working out an issue with the sensor modules; either the modules don't always send the right number of bytes, or we're dropping bytes somewhere in our serial drivers. Either way, we've got to figure this out and reliably track the movements of a single train to within a few centimetres by Monday.
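For the curious, the whole algorithm fits on a page. A compact sketch (not my project code; `rnd` is any perturbation source returning values in [-1, 1], and the corners start flat at zero here):

```cpp
#include <vector>

// Diamond-square on a (2^n + 1) x (2^n + 1) grid. Each pass does a
// "diamond" step (centre of each square gets the corner average) and a
// "square" step (centre of each edge gets the average of its in-bounds
// diamond neighbours), halving the random amplitude as steps shrink.
std::vector<std::vector<double>> diamond_square(int n, double roughness,
                                                double (*rnd)()) {
    const int size = (1 << n) + 1;
    std::vector<std::vector<double>> h(size, std::vector<double>(size, 0.0));
    double amp = roughness;
    for (int step = size - 1; step > 1; step /= 2, amp *= 0.5) {
        int half = step / 2;
        // Diamond step.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y][x] = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0
                          + amp * rnd();
        // Square step: points where (x + y) / half is odd.
        for (int y = 0; y < size; y += half)
            for (int x = (y / half) % 2 ? 0 : half; x < size; x += step) {
                double sum = 0.0; int count = 0;
                if (y >= half)       { sum += h[y - half][x]; ++count; }
                if (y + half < size) { sum += h[y + half][x]; ++count; }
                if (x >= half)       { sum += h[y][x - half]; ++count; }
                if (x + half < size) { sum += h[y][x + half]; ++count; }
                h[y][x] = sum / count + amp * rnd();
            }
    }
    return h;
}
```

Each pass halves both the step size and the perturbation amplitude; `roughness` controls how jagged the terrain ends up.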

July 8, 2009

Voronoi the Paranoid Android


Above: a cellular-based texture, Worley-style. Still implementing cool textures for my procedural content generation project; so far, I've got a couple of noise basis functions along with a framework for combining them into more complex textures. Next up: terrain generation. This will involve a few steps:
  • Generate a heightmap. For a first-pass rendering, midpoint displacement is dead easy to implement. Since heightmaps are textures, I can even slap this into my texture framework as another basis function, albeit one with a relatively heavy initialization time (although the midpoint values could be generated on demand!)
  • Set up the OpenGL window. We've been using gtkmm for previous assignments; I see no reason to break that trend, as I can save time by slapping our old window setup code into the project.
  • Issue OpenGL commands to render the heightmap. In its most basic form, this is just a series of GL_QUADS. If I were going after static rendering, I'd probably put the whole thing into a display list and use GL_QUAD_STRIP instead; however, this will be interactive, so I'll need to do something smarter than shoving the whole terrain patch into a display list.
There are other considerations as well. For example, I might want to dynamically load blocks of terrain as the user walks around. Randomly generated terrain becomes a bit of a problem in this case - unless the blocks are "aware" of each other, there will be discontinuous jumps! I'll leave those problems for later.
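As for the cellular texture at the top of this post: the basis function boils down to "distance to the nearest feature point". A brute-force sketch (a real implementation buckets feature points into a grid so only neighbouring cells get searched):

```cpp
#include <cmath>
#include <vector>

struct Point2 { double x, y; };

// Worley-style cellular basis: the value at (x, y) is the distance to
// the nearest feature point (often called F1).
double worley_f1(double x, double y, const std::vector<Point2>& features) {
    double best = 1e30;
    for (const Point2& p : features) {
        double dx = p.x - x, dy = p.y - y;
        double d = std::sqrt(dx * dx + dy * dy);
        if (d < best) best = d;
    }
    return best;
}
```

Banding or thresholding that distance, or taking differences like F2 - F1, produces the various cell looks.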

15 Minutes of Fame



20 days to go, and it is with the utmost pride that I post the above image. What is it? Look closely in the upper-left corner and you'll see what looks like a timer reading just over 15 minutes.

That's right - yesterday around 9:30 pm, after four days of debugging, we posted this commit to the SVN repo. While we fixed several other glaring issues with our context switch during this marathon of frustration, this was the one to finally extirpate a nasty Heisenbug that all but stopped development on our Real-Time project. The above image is proof that, with this bug removed, our system can now run for 15 minutes without crashing - the same 15 minutes that will be required of us during our final project demonstrations. More importantly, this allows us to continue on with more interesting aspects - like figuring out how to reliably track the location of a train in the face of highly fallible hardware and malicious switch-flipping sensor-triggering TAs.

For me, this renews the confidence that I had called into question not four days ago. It's hard to stress the importance of this enough when you're up against a formidable challenge - and the combination of Graphics and Real-Time is most certainly that.

July 6, 2009

Catch-22

22 days to go, and I'm putting the finishing touches on my raytracer. Above: my sample image, which features implicit surfaces and adaptive anti-aliasing. (Technical details? Bounded Newton-Raphson iterations, gradients for normal vectors. Simple. I'm told regula falsi is preferred; if I had another day, I'd pop that in there. As for the adaptive anti-aliasing, I'm applying a Sobel operator to the luminance values from the first pass and randomly supersampling pixels above a certain threshold.) Unfortunately, the images seem to suffer from a good deal of noise; in raytracing, that's usually a symptom of numerical instability. If I were pursuing a raytracer-based final project, I would investigate further and fix it, probably along these lines of attack:
  • The problem worsens with distance from the camera. This might be fixed by applying a projective transformation, but that would FUBAR angles for shadows and reflection.
  • Some of the mesh models have wonky normals; it might be worth the time to recalculate them to the outside.
  • I could probably eliminate a few spurious divisions and normalizations.
That said, I don't have time, so noisy images it is. Let's hope my final project is free of such trite errors!
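For reference, the Sobel flagging pass from the parenthetical above is short enough to sketch (border pixels skipped for brevity; the random supersampling itself is left out):

```cpp
#include <utility>
#include <cmath>
#include <vector>

// Sobel gradient magnitude on a luminance image; pixels whose edge
// response exceeds `threshold` get flagged for supersampling.
std::vector<std::pair<int, int>> flag_for_supersampling(
        const std::vector<std::vector<double>>& lum, double threshold) {
    std::vector<std::pair<int, int>> flagged;
    const int h = lum.size(), w = lum[0].size();
    for (int y = 1; y + 1 < h; ++y)
        for (int x = 1; x + 1 < w; ++x) {
            double gx = -lum[y-1][x-1] - 2*lum[y][x-1] - lum[y+1][x-1]
                        + lum[y-1][x+1] + 2*lum[y][x+1] + lum[y+1][x+1];
            double gy = -lum[y-1][x-1] - 2*lum[y-1][x] - lum[y-1][x+1]
                        + lum[y+1][x-1] + 2*lum[y+1][x] + lum[y+1][x+1];
            if (std::sqrt(gx * gx + gy * gy) > threshold)
                flagged.emplace_back(x, y);  // (x, y) needs extra rays
        }
    return flagged;
}
```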

July 4, 2009

Score and Four To Go



That's right - 24 days left. Current status:

Raytracer's almost finished. I just have to put a flag in to render bounding boxes, speed things up a little, and render a custom scene (unlike the one at top, which was provided by the TAs as a test.) I'll also put a bit more effort into stamping out numerical instability - especially for ray-polygon intersection - and I'll perform random supersampling to smooth things out a bit. Once I get that done, I can finally get back to my end-of-term project!

We've hit a snag in Real-Time land. System call parameters occasionally get corrupted, and it's somehow related to timer interrupts. (For full details, see here.) So far, the bug has proven itself to be highly resistant to our debugging efforts. Not all is lost, however; I'm planning to branch the repo, pare down the system to only those parts necessary to reproduce it, and tweak around until this thing is fixed. While it might cost us some short-term assignment marks, we still have time to rewrite the thing from scratch - and I'm fully prepared to do so in the absence of effective alternatives. (It's worth noting that at least one other group has followed this precipitous path!)

On a more personal note, this is the most demanding sustained workload I've ever faced. Until this month is over, weekends and holidays mean nothing to me. I'm holding up so far; the Real-Time bug brought me close to the breaking point, but I've since regained my self-confidence. The bottom line is this: I enjoy what I do. I like the challenge of it, the reward of writing something abstract to get a very concrete result. If I didn't, I would have ditched CS long ago for less silicon-encrusted pastures. If I have to remind myself of that when I'm chugging away another 10-hour stint in the Trains Lab, so be it. I'll make it through these 24 days one way or another - Evan will prevail!

That said, I'm always open to receiving words of encouragement, advice, or anything else positive.

June 29, 2009

Following Procedure


What does three hours of work get you? A Mersenne Twister, the above image, and the revelation that, according to this book, Perlin is blissfully unaware of the finer points of Knuth shuffling. (This should not be construed as an attack on the rest of Perlin's work, which is responsible for much general awesomeness in subsequent cinematography.)

Graphics geeks will recognize this immediately; for the other 99.99% of humanity, it's an example of Perlin noise. This is the first intelligible thing to come out of my graphics term project, in which I plan to explore the exciting world of procedural generation. CG artists will commonly blend several instances of Perlin noise at different frequencies (16 pixels for my test image) to generate more complex textures. These textures are then applied to objects in the scene.
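The blending itself is the classic fractional-Brownian-motion loop: double the frequency, halve the amplitude, sum. A sketch (`noise` stands in for any 2D basis function, Perlin or otherwise):

```cpp
// Blend several octaves of a noise basis. Each octave doubles the
// sampling frequency and halves the contribution (1/f weighting).
double fbm(double x, double y, int octaves,
           double (*noise)(double, double)) {
    double sum = 0.0, amp = 1.0, freq = 1.0;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * noise(x * freq, y * freq);
        amp *= 0.5;
        freq *= 2.0;
    }
    return sum;
}
```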

Why did I bother implementing a more complicated random number generator? Let me count the reasons:
  • The default implementation uses the full 32-bit integer range and provides an incredibly long period (2^19937 - 1, to be precise).
  • I now know exactly how my RNG works. (To be fair, I'm a bit fuzzy on the details of all those bit-shifts...but it feels good to roll your own!)
  • Python has used it as its core generator since version 2.3, which is a ringing endorsement in my books. (I'm told it has become something of a de facto standard.)
The bottom line is this: I'm going to be generating a lot of random numbers in the course of my project, so I might as well get a good (but still algorithmic - no /dev/urandom reads!) source of them.
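Incidentally, the same generator ships with C++ these days (via TR1 and Boost at the time of writing, shown here with the later `<random>` header), and the spec pins the sequence down exactly, which makes for a handy self-test:

```cpp
#include <random>

// The C++ standard requires that a default-constructed mt19937's
// 10000th consecutive output be exactly 4123659995, so any conforming
// implementation can be verified with a one-liner.
unsigned long mt19937_10000th() {
    std::mt19937 gen;  // default seed is 5489
    unsigned long value = 0;
    for (int i = 0; i < 10000; ++i) value = gen();
    return value;
}
```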

Next up: I'll probably tackle Voronoi cell textures and attempt to actually texture-map a procedural texture in OpenGL. I'll also be continuing to post raytracer updates here, so keep posted! For information on the ever-changing state of my team's Real-Time Programming project, see the PsychOS blog.

June 22, 2009

Shady Business

 
Here's the same picture as before, with one important difference: the spheres look, well, spherical. Between dancing under the stars (and early-morning fog!) and class, I've somehow managed to find both the time and requisite sanity to implement Phong shading. Given libraries for vector operations, this is a relatively trivial task; nevertheless, it adds a whole new dimension (yeah, I couldn't resist) to rendered images. I'm also computing shadow rays to get the nice (albeit somewhat pixelated) shadows on the occluded parts of spheres. Next up: box and mesh intersections, supersampling, and hierarchical rendering. I'll keep posting progress images as I go along.
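The per-pixel arithmetic really is trivial once the vectors are in hand. A grayscale sketch (all direction vectors unit length; ka/kd/ks and the shininess exponent are the material knobs):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Phong intensity at a point: ambient + diffuse + specular.
// n: surface normal, l: direction to light, v: direction to viewer.
double phong(const Vec3& n, const Vec3& l, const Vec3& v,
             double ka, double kd, double ks, double shine) {
    double diff = std::max(0.0, dot(n, l));
    // Reflect l about n: r = 2(n.l)n - l
    Vec3 r{2 * diff * n.x - l.x, 2 * diff * n.y - l.y, 2 * diff * n.z - l.z};
    double spec = (diff > 0.0)
        ? std::pow(std::max(0.0, dot(r, v)), shine) : 0.0;
    return ka + kd * diff + ks * spec;
}
```

Run that once per colour channel with the material's colours folded into the coefficients, and you get the shaded spheres above.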

June 20, 2009

Here's Shooting a Ray at You, Kid

(Yes, I finally saw Casablanca a couple of weeks ago.) Exhibit A: the first meaningful image produced by my raytracer for CS 488 Assignment 4. It's a binary intersection image; it shoots a single ray from the eye through each pixel, rendering it white iff the ray intersects an object. I'll tackle Phong lighting next. For those outside the Graphics/CS bubble, Phong lighting is a relatively crude but efficient way to model the way light interacts with objects. As you can see from the Wikipedia page, it allows us to shade surfaces, thereby giving the impression of depth.
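The per-pixel test behind that binary image is just the quadratic formula. A sketch for spheres (unit-length ray direction assumed):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Does the ray origin + t*dir (t >= 0) hit the sphere at `centre` with
// radius r? Solve |origin + t*dir - centre|^2 = r^2 for a valid root.
bool hits_sphere(Vec3 origin, Vec3 dir, Vec3 centre, double r) {
    Vec3 oc{origin.x - centre.x, origin.y - centre.y, origin.z - centre.z};
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return false;                   // misses entirely
    double t = (-b - std::sqrt(disc)) / 2.0;        // nearer root
    if (t < 0.0) t = (-b + std::sqrt(disc)) / 2.0;  // maybe inside sphere
    return t >= 0.0;
}
```

Loop over pixels, build the eye ray, and paint white when any object reports a hit; swapping in nearest-t bookkeeping is what turns this into a real renderer.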

In general, raytracing is an attempt to model the way vision works. Production-quality raytracers will model reflection, refraction, transparency, scattered reflection from rough surfaces, and any number of other real-world phenomena to impart as much realism as possible to the final image. Maybe I'm strange, but I think that's cool - thanks to CS 488, I now have an appreciation for exactly how much programmer effort and CPU time go into, say, Pixar's rendering pipeline. (6-90 CPU-hours per frame, according to their site!)

One last note: although the raytracer project is by no means large, it's hefty enough that ad-hoc cp -r source control won't cut it. To that end, I've decided to give Git a spin. First impressions are positive: it's fast in all the ways that Subversion isn't, and it's ridiculously easy to set up over SSH.

June 17, 2009

On the Right Trackball

 
And here is the completed spherical Trogdor (of uniform density?) You might notice a circle drawn across his beautiful Phong-shaded polygons; that's part of a virtual trackball. Roughly speaking, this allows you to rotate the model as though the scene were contained in a sphere. (Also: that site uses a rather inefficient way to get the angle between the two projected vectors - can you think of a fast approximation?) Other user-interface goodies: you can select individual joints and rotate them, causing Trogdor to coil up or flex his shapely back-arm. You can also move Trogdor around.
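For completeness, the projection half of the trackball is tiny. A sketch (points come in relative to the trackball centre; outside the sphere's silhouette this falls back to a hyperbolic sheet, a common trick to keep the mapping continuous):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Lift a 2D screen point onto the virtual trackball sphere of the
// given radius; beyond r^2/2 from the centre, use a hyperbola instead
// so dragging off the sphere's edge stays smooth.
Vec3 trackball_project(double x, double y, double radius) {
    double d2 = x * x + y * y;
    double r2 = radius * radius;
    if (d2 <= r2 / 2.0)
        return { x, y, std::sqrt(r2 - d2) };      // on the sphere
    return { x, y, (r2 / 2.0) / std::sqrt(d2) };  // hyperbola fallback
}
```

Project the previous and current mouse positions, and the rotation axis is their cross product. As for a fast angle approximation: for unit vectors, |v1 - v2| equals 2 sin(angle/2), so for small mouse deltas the distance itself is a serviceable stand-in for the angle in radians.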
Just remember: if you hear from me only sporadically this term, it's because I'm doing super-fantastic-awesome things like modelling Trogdor and driving model trains. All in the pursuit of higher education!

June 15, 2009

Burninate the Graphics Lab

 
It's CS 488 Assignment 3 time, which means I get to play around with hierarchical modelling - and what better way to do so than to construct the very likeness of Trogdor? (Yeah, it's a stretch. You try modelling anything with only transformed spheres.) BURNINATE!!!!!