
Dr Dave Technology Prescriptions


HDR Effects

One of the things I needed to start thinking about sooner or later was supporting High Dynamic Range (HDR) render targets, initially for the following reasons:

  1. I wanted to use a realistic bloom effect for the sun, without affecting other rendered planetary bodies.
  2. Atmospheric scattering algorithms lead to a wide range of luminance values.
  3. Background stars span a wide range of luminance values.

The use of HDR in combination with Tone Mapping to normalise luminance values into a standard render target would allow me to deal with each of these issues. This post will focus on just the first issue: rendering the sun.

Non-HDR Approach

There are a number of approaches to rendering bright objects. One example is the lens flare sample on the XNA Creators Club site, which uses occlusion queries on the GPU to render glow and flare sprites when the sun is visible. Figure 1 shows the use of this technique.

Figure 1. Non-HDR Sun effect using Occlusion Queries with Glow and Flare Sprites

HDR Approach

Another approach to rendering bright objects is based on image post-processing. The bloom sample on the XNA Creators Club site is a good model for implementing bloom post-processing effects, along with additional control over color saturation. I combined this approach with an HDR render target (I used SurfaceFormat.HalfVector4) and added some HDR-textured spheres to represent bright lights. I rendered my scene as normal and, using a post-processing bright-pass filter, extracted the pixels whose color values fell outside the normal (0-1) range. I then used several passes of a Gaussian blur filter to create "star" and "bloom" effects, and combined the result with the original image, as shown in Figure 2.
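As a rough sketch, the render-target side of this looks something like the following, assuming XNA Game Studio 3.1 (names such as sceneTarget and DrawScene are placeholders of mine, not from the project):

// Inside a Game class; using Microsoft.Xna.Framework.Graphics;
RenderTarget2D sceneTarget = new RenderTarget2D(
    GraphicsDevice,
    GraphicsDevice.Viewport.Width,
    GraphicsDevice.Viewport.Height,
    1,                          // a single mip level
    SurfaceFormat.HalfVector4); // 16-bit float channels, so values can exceed 1.0

GraphicsDevice.SetRenderTarget(0, sceneTarget);
DrawScene();                             // render the scene in HDR
GraphicsDevice.SetRenderTarget(0, null);

// The resolved texture feeds the bright-pass, blur and combine passes.
Texture2D hdrScene = sceneTarget.GetTexture();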

Figure 2. HDR Post-Processing Test. Note that the "lights" are not being used in any lighting calculation for the central "planet" (a simple ambient value is applied to its texture), since this test was simply to demonstrate post-processing effects.

I then applied this post-processing approach as an alternative method of rendering the sun, as shown in Figure 3 below.

Figure 3. Sun effect using HDR Post-Processing

Combinations of both approaches can also be used, as per Figure 4 below.

Figure 4. Sun effect using both HDR Post-Processing and Occlusion Query Sprites

Using HDR render targets is an expensive process, even on many current GPUs. However, the technique has a number of advantages over the Occlusion Query approach, such as:

  1. Rendering one or many HDR-textured items in a post-processing pixel shader has the same GPU cost, whereas the Occlusion Query approach requires a query per object.
  2. Since post-processing effects are pixel-based, this approach leads to more realistic results when HDR-textured items are partially occluded.

In Part 1 I showed some screenshots of a planet rendered using multiple effects. I'll discuss each of these in turn, and begin with a composite image to show how each effect contributes to the overall image.

Figure 1. Composite image showing per-pixel, single-direction lighting (top left), addition of bump-mapping and specular (Phong) reflection (top right), addition of atmospheric scattering (bottom left), and addition of clouds and cloud shadows (bottom right).

The first thing I needed to do was create a model. I could have used a predefined sphere, but chose instead to generate the mesh algorithmically so that I could easily control the number of vertices.

Texture

Once I had a model, the next step was to apply a texture. NASA has an extensive image library, and the Visible Earth site has a collection of land maps for Earth. These maps are Equidistant Cylindrical projections, so my texture coordinates were simply:

x = λ
y = θ

where

λ = longitude,
θ = latitude
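Putting the mesh generation and this mapping together, here is a minimal sketch; the ring/segment parameterisation and the use of XNA's VertexPositionNormalTexture are my choices for illustration:

// using System; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
// Generates sphere vertices ring by ring; for a unit sphere the position
// doubles as the surface normal.
static VertexPositionNormalTexture[] BuildSphere(float radius, int rings, int segments)
{
    var vertices = new VertexPositionNormalTexture[(rings + 1) * (segments + 1)];
    int v = 0;
    for (int ring = 0; ring <= rings; ring++)
    {
        // Latitude theta runs from -pi/2 (south pole) to +pi/2 (north pole).
        float theta = MathHelper.Pi * ring / rings - MathHelper.PiOver2;
        for (int seg = 0; seg <= segments; seg++)
        {
            // Longitude lambda runs from 0 to 2*pi.
            float lambda = MathHelper.TwoPi * seg / segments;
            var normal = new Vector3(
                (float)(Math.Cos(theta) * Math.Cos(lambda)),
                (float)Math.Sin(theta),
                (float)(Math.Cos(theta) * Math.Sin(lambda)));
            // Equidistant cylindrical mapping: u from longitude, v from latitude.
            var uv = new Vector2(lambda / MathHelper.TwoPi, 0.5f - theta / MathHelper.Pi);
            vertices[v++] = new VertexPositionNormalTexture(normal * radius, normal, uv);
        }
    }
    return vertices;
}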

Lighting

The Shader Series on the XNA Creators Club site is a great introduction to lighting. My initial lighting model was a simple per-pixel shader, with a single directional light source from the sun. I subsequently added specular reflection using a Phong shading algorithm.
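The lighting maths itself is compact enough to sketch. It's expressed here in C# for readability, though in the application it would live in a pixel shader:

// using System; using Microsoft.Xna.Framework;
// Per-pixel diffuse plus Phong specular. lightDir and viewDir point from the
// surface toward the light and the camera respectively, and are normalised.
static Vector3 Shade(Vector3 normal, Vector3 lightDir, Vector3 viewDir,
                     Vector3 diffuseColor, Vector3 specularColor, float shininess)
{
    float nDotL = Math.Max(Vector3.Dot(normal, lightDir), 0f);
    Vector3 diffuse = diffuseColor * nDotL;

    // Phong: reflect the light direction about the normal, compare with the view.
    Vector3 r = Vector3.Normalize(2f * nDotL * normal - lightDir);
    float spec = (float)Math.Pow(Math.Max(Vector3.Dot(r, viewDir), 0f), shininess);

    return diffuse + specularColor * spec;
}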

Relief

In order to show surface features without significantly increasing the number of vertices in the planet model, bump (normal) mapping can be used. There are numerous sources of normal maps on the Internet, available in various formats (I'm using DDS), and a good sample of how to implement normal mapping in a shader can be found on the XNA Creators Club site.
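The core of the technique is a small perturbation step, again sketched in C# rather than shader code:

// using Microsoft.Xna.Framework;
// Unpacks a normal-map texel (0..1 per channel) and rotates it from tangent
// space into world space using the per-vertex tangent frame.
static Vector3 PerturbNormal(Vector3 texel, Vector3 tangent, Vector3 binormal, Vector3 normal)
{
    Vector3 tn = texel * 2f - Vector3.One; // remap [0,1] to [-1,1]
    return Vector3.Normalize(tn.X * tangent + tn.Y * binormal + tn.Z * normal);
}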

Atmospheres

There are many discussions of atmospheric scattering, most of which reference the work of Nishita et al. The article "Accurate Atmospheric Scattering" (GPU Gems 2, Chapter 16) by Sean O'Neil served as a good starting point and is available here.

Clouds

The Visible Earth site also has a collection of cloud maps. The cloud texture is rendered on a second sphere model just above the surface of the planet.

Shadows

Casting shadows from the clouds onto the surface was an important effect, particularly toward the terminator, where the shadows are longer and fall away from the clouds themselves. My first approach was to implement a sphere-ray intersection algorithm in a pixel shader to determine the surface position of a shadow cast from my cloud sphere, and subtract the result from the existing surface texture.
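A sketch of such an intersection test, written as C# for readability rather than the actual shader code:

// using System; using Microsoft.Xna.Framework;
// Returns the nearest positive distance along the ray to the sphere surface,
// or null if the ray misses. dir is assumed to be normalised.
static float? RaySphere(Vector3 origin, Vector3 dir, Vector3 center, float radius)
{
    Vector3 oc = origin - center;
    float b = Vector3.Dot(oc, dir);
    float c = Vector3.Dot(oc, oc) - radius * radius;
    float disc = b * b - c;              // discriminant of the quadratic in t
    if (disc < 0f) return null;          // no intersection
    float t = -b - (float)Math.Sqrt(disc);
    return t >= 0f ? (float?)t : null;   // ignore hits behind the ray origin
}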

In order to render my planetary bodies, I had to consider the following effects:

  1. Texture
  2. Lighting
  3. Relief
  4. Atmospheres
  5. Clouds
  6. Shadows

I'll discuss each of these in later posts, but for now here are some screenshots of the first attempt at rendering a planetary body using these effects.

Figures 1-5. Earth Shader

In order to render a realistic star background, I could either use an image texture (mapped onto a sphere or a cube) or a set of dynamic points. I opted for the latter so that I could more easily support changing my field-of-view (i.e. "zooming in") without losing detail in my background.

Numerous star catalogues are available in the public domain. I opted for the Hipparcos Catalog, which lists the positions of approximately 100,000 stars. I converted the catalog to XML and then used the Content Pipeline in XNA Game Studio 3.1 to compress the XML to an XNB file. The data can then be loaded at runtime simply by using:

BackgroundStar[] stars = Content.Load<BackgroundStar[]>("hipparcos");

BackgroundStar is a simple class containing information such as position, brightness, spectral type etc. for each star in the catalogue.
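Something along the following lines; the exact fields and units are illustrative rather than the project's actual definition:

// Sketch of the per-star record; field names and units are my assumptions.
public class BackgroundStar
{
    public float RightAscension;    // radians
    public float Declination;       // radians
    public float ApparentMagnitude; // used to size the point sprite
    public char SpectralClass;      // O, B, A, F, G, K or M; used for tint
}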

I was really surprised at the level of performance I got when rendering these items, initially as point primitives and subsequently as point sprites. For the latter, I created a simple blurred sprite, which I sized according to the brightness and tinted according to the spectral type of the star. As an example, here are screenshots of Orion taken with both a wide and a narrow field-of-view.

Figure 1. Orion Wide Angle

Figure 2. Orion Tele-Photo

One of the issues here is that apparent magnitude is a logarithmic scale: a difference of five magnitudes corresponds to a factor of 100 in brightness. This means that the faintest stars visible to the naked eye (around magnitude 6) are roughly a thousand times fainter than the brightest star in the night sky (Sirius, magnitude -1.46). The Hipparcos Catalog lists stars down to around magnitude 12, so in order to render this range of magnitudes with only 255 levels of luminance I had to flatten the brightness curve.
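One possible flattening is to convert magnitude to linear flux and then compress it with a fractional power; the 0.25 exponent below is simply a value tuned by eye:

// using System;
// Converts apparent magnitude to linear flux (a factor of 100 per 5 magnitudes),
// compresses the curve with a fractional power, then rescales to 0-255.
static byte MagnitudeToLuminance(float magnitude, float brightest, float faintest)
{
    double flux = Math.Pow(10.0, -0.4 * (magnitude - brightest)); // 1.0 at the brightest star
    double flattened = Math.Pow(flux, 0.25);                      // compress the dynamic range
    double floor = Math.Pow(Math.Pow(10.0, -0.4 * (faintest - brightest)), 0.25);
    double t = (flattened - floor) / (1.0 - floor);               // 0 at faintest, 1 at brightest
    return (byte)(255.0 * Math.Max(0.0, Math.Min(1.0, t)));
}

For example, MagnitudeToLuminance(m, -1.46f, 12f) maps Sirius to 255 and the faintest catalogued stars to 0.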

To make orientation easier, I wanted to render a grid. In planetary terms this would be a latitude/longitude grid; the celestial equivalents are declination and right ascension respectively.

This was simply a matter of drawing line primitives on the surface of a sphere centered around the camera viewpoint. I chose not to converge all my lines of right ascension at the "poles", as shown below. The drawback is that I had to draw each line separately; a single continuous line strip is only possible if the lines converge.
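As an illustration, the vertices for a single line of constant declination might be generated as follows (the step count is arbitrary; the result is drawn as a line strip):

// using System; using Microsoft.Xna.Framework;
// Builds a circle of constant declination on a sphere centred on the camera.
// The radius just needs to sit comfortably inside the far clip plane.
static Vector3[] DeclinationCircle(float declination, float radius, int steps)
{
    var points = new Vector3[steps + 1]; // closed loop: last point repeats the first
    for (int i = 0; i <= steps; i++)
    {
        float ra = MathHelper.TwoPi * i / steps;
        points[i] = radius * new Vector3(
            (float)(Math.Cos(declination) * Math.Cos(ra)),
            (float)Math.Sin(declination),
            (float)(Math.Cos(declination) * Math.Sin(ra)));
    }
    return points;
}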

Figure 1. Celestial Grid Equator

Figure 2. Celestial Grid Poles

One advantage of using multiple lines, however, was that I had the option of varying the color of particular lines. For example, I might choose to make the celestial "equator", or the plane of the ecliptic, more opaque.

Errm, where do I start in building a Virtual Solar System?

How about the orbits of the planets? At the start of the 17th Century, Kepler formulated his laws of planetary motion. His first law defines the orbit of a planet as an ellipse with the sun at one focus, and a body's orbit and position along it can be defined by six unique pieces of data:

  1. Semi-Major Axis (a)
  2. Eccentricity (e)
  3. Inclination (i)
  4. Argument of Perifocus (ω)
  5. Longitude of Ascending Node (Ω)
  6. True Anomaly (θ)

These "Orbital Elements" for each body are available on the NASA JPL Horizons system.

The XNA Creators Club is a fantastic resource, with heaps of examples to get me started on a simple app to render orbital positions. Plugging in the data gave me the following results:

Figure 1. Inner Planet Orbits

Figure 2. Outer Planet Orbits

Once I had this basic framework for rendering orbits and positions, I could add additional bodies, such as comets and moons, as shown below.

Figure 3. Outer Planet & Comet Orbits

Figure 4. Outer Jupiter Moon Orbits

I finally decided that I should try to get my head around the XNA Framework. Why, you ask? Well, I found myself tinkering with another WPF project that required a reasonable amount of 3D, and spending a considerable amount of time performance tuning the application. I started to wonder: if I spent that tuning time instead on learning how to implement the same application using XNA, could I end up with the same level of performance in the same overall time?

So I thought I'd blow the cobwebs off this blog and use it as a journal as I try to find my path into the scary world of Games Programming.

I've had in my mind for some time the desire to learn more about the dynamics of the solar system. Maybe I spent too much time playing Elite. I really struggle with names, so for the time being, until I think of a better one, I'll refer to this project as "Virtual Universe".

This post discusses a sample I put together to allow geospatial telemetry data to be visualised using Virtual Earth. The data itself was collected by driving an Aston Martin DB8 Vantage around a track with a GPS receiver. In addition to the location of the car, basic engine telemetry was captured and synchronised with the position data.

The basic idea was to take the data, and "play back" the drive of the car around the track, layering information on a map such as vehicle position, speed, braking position etc. Multiple data sets can be overlaid on the map for comparison. In order to show the vehicle position, a basic 3D car model was chosen. Virtual Earth supports both 2D and 3D map views, the latter of which gave an opportunity to implement a "virtual helicopter" camera which could follow the vehicle around the track.

Video 1. Virtual Earth geospatial telemetry visualisation.

This video shows a couple of laps of telemetry data. The path taken on each lap is drawn on the map (each in a different colour), and each has its own 3D car model (labeled "A" and "B" respectively). The buttons along the bottom of the screen control the "virtual helicopter" camera position and which car the camera is pointing at, and can be seen in more detail in Figure 1 below.

Figure 1. "Virtual Helicopter" Camera Positions

Front/Rear

These angles follow the car from a short distance above and in front of or behind it respectively.

Above/Blimp

These angles place the camera directly above the car, at low and high altitude respectively.

Fixed

This setting fixes the camera at its current point in space, but keeps it pointed at the selected car.

In Car

This setting places the camera "inside" the car, pointing in the direction the car is traveling.

Free

This setting allows the user to freely move the camera.
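The maths behind the "Front" and "Rear" modes is simple enough to sketch. This is illustrative plain C#, not Virtual Earth API calls, and it treats latitude/longitude as a flat plane, which is fine over the extent of a track:

// using System;
// Places the camera a fixed distance ahead of (or behind) the car along its
// heading; the camera is then pointed at the car itself from a fixed height.
// heading is the car's direction of travel in radians, measured from north.
static void FollowCamera(double carLat, double carLon, double heading,
                         double offset, bool front,
                         out double camLat, out double camLon)
{
    double sign = front ? 1.0 : -1.0;
    camLat = carLat + sign * offset * Math.Cos(heading); // north-south component
    camLon = carLon + sign * offset * Math.Sin(heading); // east-west component
}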

As an aside, during development of this sample I initially only had access to a couple of models in .x format. Until I managed to find a suitable model for the car, I had to use the following:

Figure 2. Initial Actors

Initially it was helpful to add some axes to the models so I could ensure they were oriented correctly - you can see these in Figure 2. I also experimented with transparency for "ghosting" the model(s) which didn't have focus:

Figure 3. Ghost Actor(s)

The cube shown in Figure 3 was used as a visual marker (also with axes) to show the camera position when I was in a "Free" camera mode. This was really helpful in ensuring the camera was positioned and tracking objects correctly.

In Part 1 and Part 2 I focussed on 2D features. This makes a lot of sense for Surface applications, as items fundamentally move in two dimensions; however, there are particular scenarios that lend themselves to 3D, one of which I'll describe later in this post.

Video 1. Surface physics demo.

The video is divided into the following sections:

Layout

In some cases it is desirable to arrange the items into preset patterns, for example as part of an "Attract Mode" application, or when interacting with physical objects placed on the Surface. This screen defines some basic patterns and "locks" the items to the nearest position in the pattern from their original location. Selecting an item releases the "lock".
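A sketch of that snapping step, with simplified types; freeSlots holds the pattern positions not yet taken:

// using System.Collections.Generic; using System.Windows;
// Snaps an item to the nearest unoccupied slot in the preset pattern and
// marks that slot as taken.
static Point SnapToPattern(Point item, List<Point> freeSlots)
{
    Point best = freeSlots[0];
    double bestDistSq = double.MaxValue;
    foreach (Point slot in freeSlots)
    {
        double dx = slot.X - item.X, dy = slot.Y - item.Y;
        double distSq = dx * dx + dy * dy; // squared distance avoids a sqrt
        if (distSq < bestDistSq) { bestDistSq = distSq; best = slot; }
    }
    freeSlots.Remove(best); // one item per slot
    return best;
}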

3D Rolling Spheres

Spheres lend themselves to intuitive motion on a 2D plane, as when playing marbles, pool etc. When a texture is added to the sphere, it is important to ensure that the sphere "rolls" correctly when moved. Several examples of textures are shown in the video.
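The rolling itself comes down to rotating the sphere by distance/radius radians about the in-plane axis perpendicular to the drag. A sketch, assuming the table is the XZ plane with Y up, using XNA types for illustration (WPF 3D's Media3D equivalents work the same way):

// using System; using Microsoft.Xna.Framework;
// Rolling without slipping: a drag of (dx, dz) across the table rotates the
// sphere by distance/radius radians about the axis up x motion.
static Quaternion Roll(Quaternion orientation, float dx, float dz, float radius)
{
    float distance = (float)Math.Sqrt(dx * dx + dz * dz);
    if (distance < 1e-6f) return orientation;

    Vector3 axis = Vector3.Normalize(new Vector3(dz, 0f, -dx)); // up x (dx, 0, dz)
    Quaternion delta = Quaternion.CreateFromAxisAngle(axis, distance / radius);

    // Existing orientation first, then the new roll.
    return Quaternion.Normalize(Quaternion.Concatenate(orientation, delta));
}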

Here are some further screenshots.

Figure 1. Marbles

Figure 2. Pool balls

In Part 1 I introduced a generic framework I have produced for a Surface-enabled WPF layout control which has basic physical interactions.

As well as demonstrating some additional physical behaviour, I wanted to focus this post on some Surface-specific features. One of the key tenets of Surface development is multidirectional applications. This is often overlooked, even when developing for Surface, as the developer typically uses a standard development PC with a vertically-oriented screen. I should say that the radio buttons down either side of the demo aren't part of the framework I describe here; they are merely present to allow me to illustrate different features over several pages, so they should be "ignored" in any discussion of multidirectional UI.

Let's jump straight into a video.

Video 1. Surface physics demo.

The video is divided into the following sections:

Further Materials

In addition to some black and white "plastic" rounded tiles, I've included some "crystal" materials. These are 3D models of a typical faceted gem, with some suitable lighting and transparency. The colors are randomly generated each time the page is selected.

Spring Forces

I'd wanted to add these from the start, as they are great fun to play with. Whenever a "spring" tile (another poker chip) is placed on the Surface, any selected items are joined to it via a spring, or piece of elastic. Multiple springs can be connected to multiple objects. When combined with directional forces, the springs will "swing" accordingly. A basic spring algorithm is used, with a configurable spring constant and length (quite "loose" and "short" respectively in this sample).
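The force calculation is straightforward Hooke's law; here is a sketch with plain doubles, where the damping term is my addition to keep the swinging stable:

// using System;
// Spring force on an item attached to an anchor: proportional to the stretch
// beyond the rest length, directed toward the anchor, with velocity damping.
static void SpringForce(double px, double py, double vx, double vy,
                        double anchorX, double anchorY,
                        double k, double restLength, double damping,
                        out double fx, out double fy)
{
    double dx = anchorX - px, dy = anchorY - py;
    double length = Math.Sqrt(dx * dx + dy * dy);
    if (length < 1e-9) { fx = 0; fy = 0; return; }

    double stretch = length - restLength;      // positive when extended
    fx = k * stretch * (dx / length) - damping * vx;
    fy = k * stretch * (dy / length) - damping * vy;
}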

360° Directional Forces

This section illustrates how a "dial" object (you guessed it, another poker chip) can be used to control the direction of a force. When the dial is placed on the Surface, the current direction is indicated and can be changed by rotating the object.

360° Directional Lighting

In a similar approach to the directional forces above, a "dial" object is used to control the direction of the dominant light source in the model.

Here are some further screenshots.

Figure 1. Springs

Figure 2. Force direction

Figure 3. Gem lighting

Note that these screenshots are from the Surface Simulator, so the physical objects (i.e. spring object, force, and lighting dials) are necessarily virtual.

In the next article I'll discuss 3D features.