June 30, 2009

The Ultimate Showdown

So this is it - the final make-or-break stretch of my most demanding term at the University of Waterloo. Over the next 28 days, I will either successfully complete Real-Time and Graphics concurrently or consign myself to a pseudorandom location within the Bermuda Triangle of exhaustion, insanity, and despair trying. In typical fashion, I've done some preliminary number crunching: assuming roughly 8 hours of productive time per day - including weekends! - I have 224 hours in which to complete four Theory of Computation assignments, an essay about Church's approach to the Entscheidungsproblem, a raytracer, two more Real-Time train control milestones, one last midterm...and both my Real-Time and Graphics projects. I believe that this feat is tractable, albeit barely so - but time will be the judge of how well my wetware handles NP-complete scheduling problems!

The clock is ticking.

June 29, 2009

Following Procedure

What does three hours of work get you? A Mersenne Twister, the above image, and the revelation that, according to this book, Perlin is blissfully unaware of the finer points of Knuth shuffling. (This should not be construed as an attack on the rest of Perlin's work, which is responsible for much general awesomeness in subsequent cinematography.)

Graphics geeks will recognize this immediately; for the other 99.99% of humanity, it's an example of Perlin noise. This is the first intelligible thing to come out of my graphics term project, in which I plan to explore the exciting world of procedural generation. CG artists will commonly blend several instances of Perlin noise at different frequencies (16 pixels for my test image) to generate more complex textures. These textures are then applied to objects in the scene.
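The octave-blending trick is easy to sketch. Here's a minimal Python illustration, using a cheap hash-based value noise as a stand-in for true Perlin gradient noise (the constants and function names are illustrative, not from my project):

```python
import math

def hash_noise(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    n = ix * 374761393 + iy * 668265263 + seed * 982451653
    n = (n ^ (n >> 13)) * 1274126177
    n &= 0xFFFFFFFF
    return n / 0x100000000

def smoothstep(t):
    """Smooth interpolation weight: zero derivative at t=0 and t=1."""
    return t * t * (3 - 2 * t)

def value_noise(x, y, seed=0):
    """Bilinearly blend the four surrounding lattice values with smoothed weights."""
    ix, iy = math.floor(x), math.floor(y)
    sx, sy = smoothstep(x - ix), smoothstep(y - iy)
    n00 = hash_noise(ix, iy, seed)
    n10 = hash_noise(ix + 1, iy, seed)
    n01 = hash_noise(ix, iy + 1, seed)
    n11 = hash_noise(ix + 1, iy + 1, seed)
    top = n00 + sx * (n10 - n00)
    bot = n01 + sx * (n11 - n01)
    return top + sy * (bot - top)

def fractal_noise(x, y, octaves=4, lacunarity=2.0, gain=0.5):
    """Blend several noise layers: each octave doubles frequency, halves amplitude."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for o in range(octaves):
        total += amp * value_noise(x * freq, y * freq, seed=o)
        norm += amp
        amp *= gain
        freq *= lacunarity
    return total / norm  # normalized back into [0, 1)
```

The low-frequency octaves supply the broad shapes and the high-frequency ones supply the grain, which is what makes the blended result look like clouds or marble instead of static.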

Why did I bother implementing a more complicated random number generator? Let me count the reasons:
  • The standard implementation (MT19937) produces values over the full 32-bit integer range and has an astronomically long period (2^19937 - 1).
  • I now know exactly how my RNG works. (To be fair, I'm a bit fuzzy on the details of all those bit-shifts...but it feels good to roll your own!)
  • Python has used it as its core generator since version 2.3, which is a ringing endorsement in my books. (I'm told it has become something of a de facto standard.)
The bottom line is this: I'm going to be generating a lot of random numbers in the course of my project, so I might as well get a good (but still algorithmic - no /dev/urandom reads!) source of them.
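Speaking of Knuth shuffling: the unbiased version is only a few lines. A Python sketch (backed by the standard library's Mersenne Twister; the seed and table size are purely illustrative):

```python
import random

def knuth_shuffle(items, rng=None):
    """Unbiased in-place Fisher-Yates (Knuth) shuffle.

    The finer point alluded to above: at step i, the swap index must be
    drawn from [0, i] (i + 1 choices), not from the whole array.
    The "swap each element with any element" variant is biased.
    """
    rng = rng or random.Random()
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)  # inclusive on both ends
        items[i], items[j] = items[j], items[i]
    return items

# Building a Perlin-style permutation table this way:
perm = knuth_shuffle(list(range(256)), random.Random(488))
```

Seeding the generator makes the permutation table (and hence the whole texture) reproducible, which is handy when you're debugging renders.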

Next up: I'll probably tackle Voronoi cell textures and attempt to actually texture-map a procedural texture in OpenGL. I'll also keep posting raytracer updates here, so stay tuned! For information on the ever-changing state of my team's Real-Time Programming project, see the PsychOS blog.

June 22, 2009

Shady Business

Here's the same picture as before, with one important difference: the spheres look, well, spherical. Between dancing under the stars (and early-morning fog!) and class, I've somehow managed to find both the time and requisite sanity to implement Phong shading. Given libraries for vector operations, this is a relatively trivial task; nevertheless, it adds a whole new dimension (yeah, I couldn't resist) to rendered images. I'm also computing shadow rays to get the nice (albeit somewhat pixelated) shadows on the occluded parts of spheres. Next up: box and mesh intersections, supersampling, and hierarchical rendering. I'll keep posting progress images as I go along.
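For the curious, the Phong model at a single surface point boils down to an ambient + diffuse + specular sum, and a shadow ray that hits an occluder simply knocks out the last two terms. A Python sketch (illustrative only; the vector representation, light setup, and coefficients are stand-ins, not my actual raytracer code):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def phong(n, l, v, kd, ks, shininess, light_intensity=1.0, ambient=0.1, in_shadow=False):
    """Phong intensity at one point: ambient + diffuse + specular.

    n: surface normal, l: direction to the light, v: direction to the eye.
    If a shadow ray hit an occluder, only the ambient term survives.
    """
    if in_shadow:
        return ambient
    n, l, v = normalize(n), normalize(l), normalize(v)
    n_dot_l = max(dot(n, l), 0.0)
    # Mirror reflection of the light direction about the normal.
    r = tuple(2 * n_dot_l * nc - lc for nc, lc in zip(n, l))
    r_dot_v = max(dot(r, v), 0.0)
    return ambient + light_intensity * (kd * n_dot_l + ks * r_dot_v ** shininess)
```

The diffuse term is what makes the spheres look spherical; the specular term adds the bright highlight, with the shininess exponent controlling how tight it is.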

June 20, 2009

Here's Shooting a Ray at You, Kid

(Yes, I finally saw Casablanca a couple of weeks ago.) Exhibit A: the first meaningful image produced by my raytracer for CS 488 Assignment 4. It's a binary intersection image; it shoots a single ray from the eye through each pixel, rendering it white iff the ray intersects an object. I'll tackle Phong lighting next. For those outside the Graphics/CS bubble, Phong lighting is a relatively crude but efficient way to model the way light interacts with objects. As you can see from the Wikipedia page, it allows us to shade surfaces, thereby giving the impression of depth.
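The per-pixel test behind a binary intersection image is just the quadratic formula. A sketch in Python, assuming a single sphere (the tuple-based vectors and names are illustrative, not my assignment code):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """True iff the ray origin + t * direction (t > 0) hits the sphere.

    Substituting the ray into |p - center|^2 = r^2 gives a quadratic in t;
    the ray hits iff the discriminant is non-negative and a root is positive.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # ray misses the sphere entirely
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    return t1 > 0 or t2 > 0  # reject intersections behind the eye
```

Shoot one such ray per pixel, paint the pixel white on a hit and black otherwise, and you get exactly the kind of silhouette image described above.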

In general, raytracing is an attempt to model the way vision works. Production-quality raytracers will model reflection, refraction, transparency, scattered reflection from rough surfaces, and any number of other real-world phenomena to impart as much realism as possible to the final image. Maybe I'm strange, but I think that's cool - thanks to CS 488, I now have an appreciation for exactly how much programmer effort and CPU time go into, say, Pixar's rendering pipeline. (6-90 CPU-hours per frame, according to their site!)

One last note: although the raytracer project is by no means large, it's hefty enough that ad-hoc cp -r source control won't cut it. To that end, I've decided to give Git a spin. First impressions are positive: it's fast in all the ways that Subversion isn't, and it's ridiculously easy to set up over SSH.

June 17, 2009

On the Right Trackball

And here is the completed spherical Trogdor (of uniform density?). You might notice a circle drawn across his beautiful Phong-shaded polygons; that's part of a virtual trackball. Roughly speaking, it allows you to rotate the model as though the scene were contained in a sphere. (Also: that site uses a rather inefficient way to get the angle between the two projected vectors - can you think of a fast approximation?) Other user-interface goodies: you can select individual joints and rotate them, causing Trogdor to coil up or flex his shapely back-arm. You can also move Trogdor around.
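The standard trackball recipe (project both mouse positions onto a sphere, then rotate about their cross product) can be sketched in a few lines, including one answer to the fast-approximation question: replace the acos of the normalized dot product with the chord length over the radius, which is accurate for the small per-frame deltas a trackball actually sees. A Python sketch, purely illustrative:

```python
import math

def project_to_sphere(x, y, r=1.0):
    """Project a centered 2D point onto a sphere of radius r, falling back
    to a hyperbolic sheet near the edge (the usual trick that keeps the
    mapping continuous as the cursor leaves the circle)."""
    d = math.hypot(x, y)
    if d < r / math.sqrt(2):
        z = math.sqrt(r * r - d * d)  # on the sphere
    else:
        z = (r * r / 2) / d           # on the hyperbola
    return (x, y, z)

def trackball_rotation(p1, p2, r=1.0):
    """Rotation axis and approximate angle taking projected p1 to p2.

    Instead of acos(dot(v1, v2) / (|v1| * |v2|)), use the cheap
    approximation angle = |v2 - v1| / r (chord length over radius).
    """
    v1 = project_to_sphere(p1[0], p1[1], r)
    v2 = project_to_sphere(p2[0], p2[1], r)
    axis = (v1[1] * v2[2] - v1[2] * v2[1],
            v1[2] * v2[0] - v1[0] * v2[2],
            v1[0] * v2[1] - v1[1] * v2[0])
    angle = math.dist(v1, v2) / r
    return axis, angle
```

Feed the axis and angle into a rotation matrix (or quaternion) each frame and the model tumbles as though you were spinning a ball under your fingertip.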
Just remember: if you hear from me only sporadically this term, it's because I'm doing super-fantastic-awesome things like modelling Trogdor and driving model trains. All in the pursuit of higher education!

June 15, 2009

Burninate the Graphics Lab

It's CS 488 Assignment 3 time, which means I get to play around with hierarchical modelling - and what better way to do so than to construct the very likeness of Trogdor? (Yeah, it's a stretch. You try modelling anything with only transformed spheres.) BURNINATE!!!!!

June 4, 2009

Serial Experiments Lisp

I was watching Serial Experiments Lain when I noticed a certain LISP keyword scroll across the screen. A few posters have uploaded the still frame. Reminds me of the nmap cameo.

June 1, 2009

Data Abort

Found an interesting article, from which I'll pull two excerpts for comparison:

"White House officials now want to make government data sets available for citizens to use however they see fit."

"The problem is figuring out how to organize and display the data in a useful and informative way, instead of forcing people to sift through heaps of mind-numbing spreadsheets."

The first is a visionary statement. It amounts to crowdsourcing data analysis, something that (if applied correctly) could rescue our governments from the technophobic morass they have so willingly plunged into. At the same time, it would provide a spectacular resource for future machine learning research.

The second, if taken at face value, is facepalm-worthy. Why? If you want a gesture like this to be effective, you have to supply the raw data. Standard graphs and charts aren't enough; let us decide how we want to visualize your data. Let us rip your datasets apart with state-of-the-art statistical analyses and classification algorithms. Better yet - allow us to upload our homebrew visualizations, hold an online voting process, and host the best examples.

Imagine this simple gesture taken to its logical conclusion: complete data transparency of government actions. There would be no room for nepotism, pork-barrel spending, or other forms of shady backroom politics. We would finally have the power to inspect the inner workings of our government, much as our intelligence agencies now monitor us. After all, it is extremely improbable that the likes of CSIS and the NSA will give up the incredible power offered by telecommunications surveillance, much as it is laughable to expect the world's nuclear powers to spontaneously and permanently renounce their missile stocks. The technology is there, the knowledge is there, and nothing short of the complete destruction of mankind will change that. The best we can do is level the playing field.

This sort of talk immediately raises national security concerns. Should it? What if every citizen had the ability to assess national security threats, much as every Wikipedia user has the ability to stop malicious edits in their tracks? Which model, in the end, is more robust - the cathedral of centralized government, or the bazaar of direct democracy?

Enough ranting from me; I've got some projective geometry to tackle.