HDR Effects - Technology Prescriptions
One of the things I needed to start thinking about sooner or later was supporting High Dynamic Range (HDR) render targets, initially for the following reasons:
- I wanted to use a realistic bloom effect for the sun, without affecting other rendered planetary bodies.
- Atmospheric scattering algorithms lead to a wide range of luminance values.
- Background stars span a wide range of luminance values.
The use of HDR in combination with Tone Mapping to normalise luminance values into a standard render target would allow me to deal with each of these issues. This post will focus on just the first issue of rendering the sun.
Non-HDR Approach
There are a number of approaches to rendering bright objects. One example is the lens flare sample on the XNA Creators Club site, which uses occlusion queries on the GPU to render glow and flare sprites when the sun is visible. Figure 1 shows the use of this technique.
HDR Approach
Another approach to rendering bright objects is based on image post-processing. The bloom sample on the XNA Creators Club is a good model for implementing bloom post-processing effects, along with additional control over color saturation. I combined this approach with an HDR render target (I used SurfaceFormat.HalfVector4), and added some HDR-textured spheres to represent bright lights. I rendered my scene as per normal and, using a post-processing bright-pass filter, extracted pixels for which the color range fell outside of the normal (0-1) range. I then used several passes of a Gaussian blur filter to create "star" and "bloom" effects and combined this with the original image, as shown in Figure 2.
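To make the bright-pass step concrete, here's a minimal sketch expressed in C# (the real filter runs as a pixel shader over the HDR render target, so treat the luminance weights and threshold handling as illustrative assumptions rather than my shader verbatim):

    // Minimal sketch of the bright-pass step, expressed in C# for clarity;
    // in practice this logic lives in a pixel shader over the HDR target.
    using Microsoft.Xna.Framework;

    static class BrightPass
    {
        // Keep only the portion of an HDR colour that exceeds the 0-1 range.
        public static Vector4 Extract(Vector4 hdrColor, float threshold)
        {
            Vector3 rgb = new Vector3(hdrColor.X, hdrColor.Y, hdrColor.Z);
            float luminance = Vector3.Dot(rgb, new Vector3(0.299f, 0.587f, 0.114f));
            if (luminance <= threshold)
                return new Vector4(Vector3.Zero, hdrColor.W); // nothing to bloom
            // Scale the colour by the fraction of luminance above the threshold.
            float excess = (luminance - threshold) / luminance;
            return new Vector4(rgb * excess, hdrColor.W);
        }
    }

The extracted image is then blurred and added back to the original, which is what produces the glow around anything brighter than white.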
I then applied this post-processing approach as an alternative method to rendering the sun, as shown in Figure 3 below.
Combinations of both approaches can also be used, as per Figure 4 below.
Using HDR render targets is an expensive process, even for many current GPUs. However, it has a number of advantages over the Occlusion Query approach, such as:
- Rendering one or many HDR-textured items in a post-processing pixel shader has the same GPU cost, unlike running multiple occlusion queries.
- Since post-processing effects are pixel-based, this approach leads to more realistic results when HDR-textured items are partially occluded.
In Part 1 I showed some screenshots of a planet rendered using multiple effects. I'll discuss each of these in turn, and begin with a composite image to show how each effect contributes to the overall image.
The first thing I needed to do was create a model. I could have used a predefined sphere, but chose instead to generate the mesh algorithmically so that I could easily control the number of vertices.
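As a sketch of the approach (the vertex format, the ring/segment parameterisation and the omission of the index buffer are all simplifications of my own):

    // Minimal sketch of algorithmic sphere generation: vertices are placed on
    // rings of latitude, so the vertex count is controlled by two parameters.
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class SphereBuilder
    {
        public static List<VertexPositionNormalTexture> Build(float radius, int rings, int segments)
        {
            var vertices = new List<VertexPositionNormalTexture>();
            for (int r = 0; r <= rings; r++)
            {
                float theta = MathHelper.Pi * r / rings;            // 0..pi, pole to pole
                for (int s = 0; s <= segments; s++)
                {
                    float lambda = MathHelper.TwoPi * s / segments; // 0..2pi around the axis
                    var normal = new Vector3(
                        (float)(System.Math.Sin(theta) * System.Math.Cos(lambda)),
                        (float)System.Math.Cos(theta),
                        (float)(System.Math.Sin(theta) * System.Math.Sin(lambda)));
                    var uv = new Vector2((float)s / segments, (float)r / rings);
                    vertices.Add(new VertexPositionNormalTexture(normal * radius, normal, uv));
                }
            }
            return vertices; // index buffer construction omitted for brevity
        }
    }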
Texture
Once I had a model, the next step was to apply a texture. NASA has an extensive image library, and the Visible Earth site has a collection of land maps for Earth. These maps are Equidistant Cylindrical projections, so my texture coordinates were simply:
x = λ
y = θ

where λ is longitude and θ is latitude.
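In code terms, the mapping looks something like the following minimal sketch; the normalisation into the 0-1 range expected by the texture sampler is an assumed detail:

    // Mapping longitude/latitude to texture coordinates for an equidistant
    // cylindrical (equirectangular) map.
    using Microsoft.Xna.Framework;

    static class EquirectangularMapping
    {
        // lambda: longitude in radians (-pi..pi), theta: latitude in radians (-pi/2..pi/2)
        public static Vector2 ToUV(float lambda, float theta)
        {
            float u = lambda / MathHelper.TwoPi + 0.5f; // x = longitude
            float v = 0.5f - theta / MathHelper.Pi;     // y = latitude (v grows downward)
            return new Vector2(u, v);
        }
    }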
Lighting
The Shader Series on the XNA Creators Club site is a great introduction to lighting. My initial lighting model was a simple per-pixel shader, with a single directional light source from the sun. I subsequently added specular reflection using a Phong shading algorithm.
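Here's the lighting calculation as a C#-flavoured sketch (the real implementation is an HLSL pixel shader, and the material constants are assumptions):

    // Sketch of per-pixel lighting: one directional light plus Phong specular.
    using Microsoft.Xna.Framework;

    static class PhongLighting
    {
        // lightDir and viewDir point from the surface toward the light and viewer.
        public static Vector3 Shade(Vector3 normal, Vector3 lightDir, Vector3 viewDir,
                                    Vector3 albedo, float specularPower)
        {
            normal.Normalize();
            // Diffuse term: Lambertian, clamped to zero on the night side.
            float diffuse = MathHelper.Max(Vector3.Dot(normal, lightDir), 0f);
            // Phong specular: reflect the light about the normal, compare with the view.
            Vector3 reflected = Vector3.Reflect(-lightDir, normal);
            float specular = (float)System.Math.Pow(
                MathHelper.Max(Vector3.Dot(reflected, viewDir), 0f), specularPower);
            return albedo * diffuse + Vector3.One * specular;
        }
    }

At planetary scale the sun is effectively a directional light, so lightDir is constant across the whole surface.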
Relief
In order to show surface features without significantly increasing the number of vertices in the planet model, bump (normal) mapping can be used. There are numerous sources of normal maps on the Internet, available in various formats (I'm using DDS), and a good sample of how to implement normal mapping in a shader can be found on the XNA Creators Club site.
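The core calculation is small; here's a sketch in C# terms of how the sampled normal perturbs the surface normal (in the shader this uses the interpolated tangent basis from the vertex data):

    // Sketch of the normal mapping idea: a tangent-space normal sampled from
    // the map is rotated into world space and replaces the surface normal.
    using Microsoft.Xna.Framework;

    static class NormalMapping
    {
        public static Vector3 PerturbNormal(Vector3 mapSample, // RGB from the map, 0..1
                                            Vector3 normal, Vector3 tangent, Vector3 binormal)
        {
            // Expand the stored sample from 0..1 into the -1..1 vector it encodes.
            Vector3 n = mapSample * 2f - Vector3.One;
            // Rotate from tangent space into world space using the TBN basis.
            Vector3 world = n.X * tangent + n.Y * binormal + n.Z * normal;
            world.Normalize();
            return world;
        }
    }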
Atmospheres
There are many discussions on the subject of atmospheric scattering, many of which reference the work by Nishita et al. The article "Accurate Atmospheric Scattering" by Sean O'Neil (GPU Gems 2, Chapter 16) served as a good starting point and is freely available online.
Clouds
The Visible Earth site also has a collection of cloud maps. I rendered this texture on another sphere model just above the surface of the planet.
Shadows
Casting shadows from the clouds onto the surface was an important effect, particularly toward the terminator where the shadows are longer and not directly below the clouds themselves. My first approach was to implement a sphere-ray intersection algorithm in a pixel shader to determine the surface position of a shadow cast from my cloud sphere, and subtract the result from the existing surface texture.
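Here's a sketch of that intersection test (parameter names are mine, the ray direction is assumed normalised, and the cloud sphere is assumed centred on the planet's origin):

    // Sketch of the ray/sphere test used for cloud shadows: cast a ray from
    // the surface point toward the sun and find where it crosses the cloud
    // sphere. Standard quadratic form with a = 1 for a normalised direction.
    using Microsoft.Xna.Framework;

    static class CloudShadow
    {
        public static bool RaySphere(Vector3 origin, Vector3 dir, float radius, out Vector3 hit)
        {
            hit = Vector3.Zero;
            float b = 2f * Vector3.Dot(origin, dir);
            float c = origin.LengthSquared() - radius * radius;
            float discriminant = b * b - 4f * c;
            if (discriminant < 0f)
                return false;
            float t = (-b + (float)System.Math.Sqrt(discriminant)) / 2f; // far root
            if (t <= 0f)
                return false;
            hit = origin + dir * t;
            return true;
        }
    }

The surface point sits inside the cloud sphere, so the far root is the exit point toward the sun - that's where the cloud texture is sampled to decide how much light to subtract.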
In order to render a realistic star background, I could either use an image texture (mapped onto either a sphere or a cube) or a set of dynamic points. I opted for the latter so that I could more easily support changing my field-of-view (i.e. "zooming in") without losing detail in my background.
Numerous star catalogues are available in the public domain. I opted for the Hipparcos Catalog, which lists the positions of approximately 100,000 stars. I converted the catalog to XML and then used the Content Pipeline in XNA Game Studio 3.1 to compress the XML to an XNB file. The data can then be loaded at runtime simply by using:
BackgroundStar[] stars = Content.Load<BackgroundStar[]>("hipparcos");
BackgroundStar is a simple class containing information such as position, brightness, spectral type etc. for each star in the catalogue.
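For illustration, here's a hypothetical sketch of the shape such a class might take; the actual field names and types in my class are richer, so treat these as assumptions:

    // Hypothetical sketch of the per-star data loaded from the catalogue.
    public class BackgroundStar
    {
        public float RightAscension;   // radians
        public float Declination;      // radians
        public float ApparentMagnitude;
        public char SpectralType;      // O, B, A, F, G, K, M
    }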
I was really surprised at the level of performance I got when rendering these items, initially as point primitives, and subsequently as point sprites. For the latter, I created a simple blurred sprite, sized according to the brightness and tinted according to the spectral type of the star. As an example, here's a screenshot of Orion taken with both wide and narrow fields-of-view.
One of the issues here is that the apparent magnitude of a star is a logarithmic scale. This means that the faintest stars visible to the naked eye (around magnitude 7-8) are approximately five thousand times fainter than the brightest star in the sky (Sirius, magnitude -1.46). The Hipparcos Catalog lists stars down to around magnitude 14, so in order to render this range of magnitudes with only 256 levels of luminance I had to flatten the brightness curve.
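As a sketch of one flattening approach (the catalogue limits are taken from the figures above; the linear-in-magnitude mapping is my assumption): since magnitude is already the logarithm of flux, using it directly compresses the five-thousand-fold brightness range into something an 8-bit channel can hold.

    // Sketch of flattening the brightness curve. True relative flux is
    // 10^(-0.4 * m), which spans far more than 256 levels; mapping the
    // magnitude itself linearly into 0..1 is the flattening step.
    using Microsoft.Xna.Framework;

    static class StarBrightness
    {
        public static float ToLuminance(float magnitude)
        {
            const float Brightest = -1.46f; // Sirius
            const float Faintest = 14f;     // approximate catalogue limit
            // Smaller magnitudes are brighter, so invert and normalise to 0..1.
            float t = (Faintest - magnitude) / (Faintest - Brightest);
            return MathHelper.Clamp(t, 0f, 1f);
        }
    }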
To make things much easier, I wanted to render a grid to help with orientation. In planetary terms this would be a latitude/longitude grid, however the celestial equivalent is declination/right ascension respectively.
This was simply a matter of drawing line primitives on the surface of a sphere centered on the camera viewpoint. I chose not to converge all my lines of right ascension at the "poles", as shown below. The only drawback to this was that I had to draw multiple lines, since a single continuous line is only possible if the lines converge.
One advantage of using multiple lines, however, was that I had the option of varying the color for particular lines. For example, I might choose to make the "equator", or the ecliptic, more opaque.
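As a sketch of how one such grid line can be generated (the segment count, sphere radius and line-strip layout are assumptions of mine):

    // Sketch of generating one grid line: points along a circle of constant
    // declination on a sphere around the camera, drawn as a line strip.
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class CelestialGrid
    {
        public static List<Vector3> DeclinationCircle(float declination, float radius, int segments)
        {
            var points = new List<Vector3>();
            float y = (float)System.Math.Sin(declination) * radius;
            float r = (float)System.Math.Cos(declination) * radius;
            for (int i = 0; i <= segments; i++)
            {
                float ra = MathHelper.TwoPi * i / segments; // right ascension sweep
                points.Add(new Vector3(
                    r * (float)System.Math.Cos(ra), y, r * (float)System.Math.Sin(ra)));
            }
            return points;
        }
    }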
Errm, where do I start in building a Virtual Solar System?
How about the orbits of the planets? At the start of the 17th Century, Kepler formulated his laws of planetary motion. His first law defined the orbit of a planet as an ellipse with the sun at a focus, and an orbit, together with a body's position on it, can be defined by six unique pieces of data:
- Semi-Major Axis (a)
- Eccentricity (e)
- Inclination (i)
- Argument of Perifocus (ω)
- Longitude of Ascending Node (Ω)
- True Anomaly (θ)
These "Orbital Elements" for each body are available on the NASA JPL Horizons system.
The XNA Creators Club is a fantastic resource, with heaps of examples to get me started on a simple app to render orbital positions. Plugging in the data gave me the following results:
Once I had this basic framework for rendering orbits and positions I could add additional bodies, such as comets and moons, as shown below.
I finally decided that I should try to get my head around the XNA Framework. Why, you ask? Well, I found myself tinkering on another WPF project requiring a reasonable amount of 3D, and spending a considerable amount of time performance tuning the application. I started to wonder: if I instead spent that time learning how to implement the same application using XNA, could I reach the same level of performance in the same overall time?
So I thought I'd blow the cobwebs off this blog and use it as a journal as I try to find my path into the scary world of Games Programming.
I've had in my mind for some time the desire to learn more about the dynamics of the solar system. Maybe I spent too much time playing Elite. I really struggle with names, so for the time being, until I think of a better one, I'll refer to this project as "Virtual Universe".
This post discusses a sample I put together to allow geospatial telemetry data to be visualised using Virtual Earth. The data itself was collected by driving an Aston Martin DB8 Vantage around a track with a GPS receiver. In addition to the location of the car, basic engine telemetry was captured and synchronised with the position data.
The basic idea was to take the data, and "play back" the drive of the car around the track, layering information on a map such as vehicle position, speed, braking position etc. Multiple data sets can be overlaid on the map for comparison. In order to show the vehicle position, a basic 3D car model was chosen. Virtual Earth supports both 2D and 3D map views, the latter of which gave an opportunity to implement a "virtual helicopter" camera which could follow the vehicle around the track.
In Part 1 I introduced a generic framework I have produced for a Surface-enabled WPF layout control which has basic physical interactions.
As well as demonstrating some additional physical behaviour, I wanted to focus this post on some Surface-specific features. One of the key tenets of Surface development is multidirectional applications. This is often overlooked, even when developing for Surface, as the developer typically uses a standard development PC with a vertically-oriented screen. I should say that the radio buttons down either side of the demo aren't part of the framework I describe here - they are merely present to allow me to illustrate different features over several pages - so should be ignored when it comes to a discussion about multidirectional UI.
Let's jump straight into a video.