Display a perspective image in C#

[example]

A while ago someone asked me how he could display an image in WPF with one side smaller than the other to produce a faux perspective look as shown in the picture on the right. Unfortunately, two-dimensional WPF (and Windows Forms) transformations aren't set up to do this. They can translate, rotate, scale, and skew an image, but they always map a rectangular area onto a parallelogram, so they can't produce this effect.

You could do this yourself by directly manipulating the image’s pixels, but that’s (A) relatively slow, (B) complicated in Windows Forms, and (C) VERY complicated in WPF. (In keeping with the unofficial WPF slogan, “Twice as flexible and only five times as hard.”)

The best way I can think of to create this kind of perspective image so it looks three-dimensional is to actually display it in three dimensions. The code isn’t terribly long, although it is confusing.

The example program uses the following XAML code to display the image with perspective.

<Viewport3D Margin="0">
    <ModelVisual3D>
        <ModelVisual3D.Content>
            <Model3DGroup>
                <!-- Lights -->
                <AmbientLight Color="White" />

                <!-- The image -->
                <GeometryModel3D>
                    <GeometryModel3D.Geometry>
                        <MeshGeometry3D
                            Positions="-1.125,-1.415,0 1.125,-1.415,0 1.125,1.415,0 -1.125,1.415,0"
                            TriangleIndices="0,1,2 2,3,0"
                            TextureCoordinates="0,1 1,1 1,0 0,0"
                        >
                        </MeshGeometry3D>
                    </GeometryModel3D.Geometry>
                    <GeometryModel3D.Material>
                        <DiffuseMaterial>
                            <DiffuseMaterial.Brush>
                                <ImageBrush ImageSource="algorithms.png"/>
                            </DiffuseMaterial.Brush>
                        </DiffuseMaterial>
                    </GeometryModel3D.Material>
                    <GeometryModel3D.Transform>
                        <RotateTransform3D>
                            <RotateTransform3D.Rotation>
                                <AxisAngleRotation3D Axis="0,1,0" Angle="-30"/>
                            </RotateTransform3D.Rotation>
                        </RotateTransform3D>
                    </GeometryModel3D.Transform>
                </GeometryModel3D>
            </Model3DGroup>
        </ModelVisual3D.Content>
    </ModelVisual3D>

    <Viewport3D.Camera>
        <PerspectiveCamera
            Position = "0, 0, 3.5"
            LookDirection = "0, 0, -1"
            UpDirection = "0, 1, 0"
            FieldOfView = "60">
        </PerspectiveCamera>
    </Viewport3D.Camera>
</Viewport3D>

Inside the main window’s Grid control, the program defines a Viewport3D. That basically acts as a window into a three-dimensional space.

The viewport displays a ModelVisual3D, an object that can render content in three dimensions.

That object’s Content property holds a Model3DGroup, which is a group of 3D models. The group contains the lights and the actual objects we want to display.

If the scene contained surfaces with different orientations, such as the sides of a cube or the facets that make up a sphere, then directional lighting would give those surfaces slightly different shades so they would look more three-dimensional.

However, this example displays a single flat three-dimensional object, so it will have the same lighting across its surface no matter how many lights the program uses. To keep things simple, the example defines a single white ambient light. An ambient light applies equally to every surface in the scene no matter how it is oriented. (Extra lights and directional lights also give the rendering software more work to do, so a single ambient light is more efficient, although in a scene this simple efficiency isn’t really an issue.)
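For comparison, here is a hypothetical alternative light you could drop in place of the AmbientLight. Because the example’s only surface faces the camera, a directional light shining along the negative Z axis would light it just as evenly.

```xml
<!-- Hypothetical alternative to the AmbientLight: a white light
     shining into the scene along the negative Z axis. The example's
     flat, camera-facing surface would still be evenly lit. -->
<DirectionalLight Color="White" Direction="0,0,-1" />
```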

Next the code defines a GeometryModel3D, which represents a three-dimensional object. A model is transformed as a unit, so in more elaborate programs you would use a separate model for each “physical” object in a scene. For example, if you wanted to draw a car moving across some ground, you would want the car to be one model and the ground to be another model.

The model’s Geometry property defines the model’s objects in 3D space. That doesn’t include the model’s material, which is defined shortly. The Geometry property holds a MeshGeometry3D object, which defines three-dimensional triangles.

The Positions property defines the vertices that make up the solids. This example displays a rectangle so this property defines its four corners.

The TriangleIndices property gives the indices of the Positions that make up the model’s triangles. The first three TriangleIndices values give the indices for the first triangle, the next three give the indices for the second triangle, and so on. A rectangle requires two triangles, so this example defines two.
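Spelled out, the example’s indices map onto the Positions values like this:

```xml
<!-- Positions index:  0 = (-1.125, -1.415, 0)  lower left
                       1 = ( 1.125, -1.415, 0)  lower right
                       2 = ( 1.125,  1.415, 0)  upper right
                       3 = (-1.125,  1.415, 0)  upper left
     Triangle 1: 0,1,2 = lower left, lower right, upper right
     Triangle 2: 2,3,0 = upper right, upper left, lower left
     Both triangles are listed counterclockwise as seen from the
     camera, so their front faces point toward the viewer. -->
```

The counterclockwise ordering matters: WPF treats the counterclockwise side of a triangle as its front face, and by default only front faces are drawn.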

The TextureCoordinates property defines the coordinates in the material (defined shortly) for each of the vertices. Texture coordinates place (0, 0) in the upper left corner of the material, with X increasing to the right and Y increasing downward, so (1, 1) is the lower right corner. This is the same orientation you use to address the pixels in a bitmap, although texture coordinates always range from 0.0 to 1.0.
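For this example, the mapping from vertices to texture coordinates works out as follows. Note that 3D Y increases upward while texture Y increases downward, so the bottom vertices get texture Y = 1.

```xml
<!-- Vertex 0, lower left  in 3D → (0,1), lower left  of the image
     Vertex 1, lower right in 3D → (1,1), lower right of the image
     Vertex 2, upper right in 3D → (1,0), upper right of the image
     Vertex 3, upper left  in 3D → (0,0), upper left  of the image -->
```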

Note that you don’t need to assign texture coordinates if the material has a solid color. You only need to do this if the material colors its triangles with an image.

Having defined the model’s geometry, the code then defines its material. This example uses a DiffuseMaterial, which shades surfaces depending on their orientation with respect to directional lighting sources (which this example doesn’t use). This is the kind of material used by most models.

This example gives the material a brush that contains the image of my latest algorithms book cover. The texture coordinates map points in the model’s triangles to points on the brush. The result is an image of the book’s cover.
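As a sketch of the solid-color case mentioned above, you could replace the material with one that uses a plain brush. (The brush color here is an arbitrary choice.) With this material the texture coordinates would be unnecessary.

```xml
<!-- Hypothetical solid-color alternative. The XAML type converter
     turns the color name into a SolidColorBrush. -->
<GeometryModel3D.Material>
    <DiffuseMaterial Brush="LightSteelBlue"/>
</GeometryModel3D.Material>
```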

The last part of the model’s definition is a transformation that rotates it around the Y axis so it’s turned a bit to the side.
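If you wanted to combine several transformations, you could wrap them in a Transform3DGroup. This hypothetical variant tilts the image the other way and pushes it slightly away from the camera; the angle and offset values are arbitrary.

```xml
<!-- Hypothetical variant: rotate the other direction, then
     translate the model back along the Z axis. -->
<GeometryModel3D.Transform>
    <Transform3DGroup>
        <RotateTransform3D>
            <RotateTransform3D.Rotation>
                <AxisAngleRotation3D Axis="0,1,0" Angle="30"/>
            </RotateTransform3D.Rotation>
        </RotateTransform3D>
        <TranslateTransform3D OffsetZ="-0.5"/>
    </Transform3DGroup>
</GeometryModel3D.Transform>
```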

The example finishes by defining the viewport’s camera. As you can probably guess, this determines the position from which the viewport views the scene defined by the model. In this example the camera is located at position (0, 0, 3.5) and is pointed in the direction <0, 0, -1>.

The example uses a PerspectiveCamera so the final image uses perspective. Alternatively you could use an OrthographicCamera, which flattens the result so there’s no perspective. In this example the result would just be a normal image of the book cover, so it’s not very interesting. In a more complicated scene an orthographic camera can be useful because it preserves sizes regardless of distance from the camera: if something far from the camera is the same size as something close to it, the two appear the same size in the final image.
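A minimal sketch of the orthographic alternative is shown below. OrthographicCamera has no FieldOfView; instead its Width property sets the width of the viewing volume in world units (the value 4 here is a guess you would tune to frame the model).

```xml
<!-- Hypothetical replacement for the PerspectiveCamera. -->
<Viewport3D.Camera>
    <OrthographicCamera
        Position="0,0,3.5"
        LookDirection="0,0,-1"
        UpDirection="0,1,0"
        Width="4" />
</Viewport3D.Camera>
```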

As I said, the code isn’t terribly long, although it is confusing. If you just want to display a single object with perspective as in this example, you should be able to reuse this code and just plug in the new image. If you’re going to do something much more complicated, you probably need to build a full three-dimensional scene.






About Rod Stephens

Rod Stephens is a software consultant and author who has written more than 30 books and 250 magazine articles covering C#, Visual Basic, Visual Basic for Applications, Delphi, and Java.
