Parliamentary Constituencies

With the recent release of the OS Free Data and the upcoming election, I’ve been looking at Parliamentary Constituency boundaries. It’s not clear from the accompanying documentation which boundary set the OS Free Data is based on, but the following image should clarify things. This is from the OS Free Data:

OS Free Parliamentary Constituencies

Now compare that to the PCON 2010 dataset that I obtained from the Boundary Commission:

PCON2010 Boundary Dataset

This is the set of boundaries being used for the upcoming election: you can see that “Hammersmith and Fulham” has been split, with “Hammersmith” standing alone and “Fulham” joined with “Chelsea”. This can be verified on the UK Parliament website and matched against their list of constituencies being contested in the General Election.

This means that the OS Free data is based on either the 2001 or the 2008 boundary set (see National Statistics Westminster Geographies). It also doesn’t help that on 1 April 2010 the Boundary Commission changed from being part of the Electoral Commission to a new body, the Local Government Boundary Commission for England. This also raises the issue of the Irish political boundaries, as we don’t currently have any access to them, but we could make a substitute set of boundaries from postcode data.

Now that all the constituency boundaries are sorted out, we’re planning to add more electoral maps to our MapTube website, which will be at the following address:

http://www.maptube.org/election

UK Snow Dec 2009 to Jan 2010

UK Snow Dec 2009 to Jan 2010

There has been quite a lot of interest in using Twitter to crowd-source snow amounts, but I couldn’t help wondering how this compares with the data available from the Met Office observing network. Over the period from 20 December 2009 to 15 January 2010 I downloaded the synoptic reports from the Department of Atmospheric and Environmental Sciences at the University at Albany. I then processed these using a GTS processor that I wrote, putting the decoded reports for the UK into a Postgres database. It was then a simple matter to write a Java program to interrogate this database and generate a KML file:

KML file of UK Snow Depth

If you want to see what the original data looks like, then I’ve placed a CSV file at the following link:

UK Snow CSV

Although I could only get data for the hours 06:00, 12:00 and 18:00, the current snow depth is displayed in the KML animation until the next observation is available. The data is actually reported every hour, just not on the international feed from Albany. The value shown is snow depth on the ground in centimetres, taken from the “sss” group in section 3 of the SYNOP. The height of each bar represents the depth of the snow, with colours as follows (a rough sketch of this colour banding appears after the list):

  • 0-4 cm blue
  • 5-9 cm cyan
  • 10-14 cm green
  • 15-19 cm yellow
  • 20 cm and over red
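As a rough illustration of that colour banding, here is a minimal sketch in Java of mapping a depth in centimetres to a KML colour string (KML colours are aabbggrr hex). The method is invented for this example and is not the original program:

// Minimal sketch (not the original program): map a snow depth in cm to a
// KML colour string in aabbggrr hex, matching the bands listed above.
static String depthToKmlColour(int depthCm) {
    if (depthCm < 5)  return "ffff0000"; // 0-4 cm: blue
    if (depthCm < 10) return "ffffff00"; // 5-9 cm: cyan
    if (depthCm < 15) return "ff00ff00"; // 10-14 cm: green
    if (depthCm < 20) return "ff00ffff"; // 15-19 cm: yellow
    return "ff0000ff";                   // 20 cm and over: red
}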

Importing GPS Tracks into 3DS Max

A GPS track imported into 3DS Max from a GPX File

While playing around with 3DS Max 2009 for some of our GENeSIS work, I happened to notice that it’s now possible to use .net assemblies in MaxScript. My first thought was to use this for some of our agent based modelling work, but when Fabian Neuhaus asked about importing GPX files, I saw a really easy way of doing this.

The “System.Xml” assembly in .NET makes parsing the GPX file extremely simple. A GPX file is nothing more than an XML file containing a list of trackpoints, each with a lat/lon and a time. The following script parses a GPX file and generates an animation of a box following a spline which follows the GPS track:

Maxscript GPX Importer
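The parsing step itself amounts to reading every trkpt element and pulling out its lat, lon and time. The downloadable script above does this in MaxScript via System.Xml; purely as a stand-alone illustration of the same idea (the file name and class name are invented for the example), the equivalent in Java looks roughly like this:

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Rough sketch: read lat, lon and time from every <trkpt> in a GPX file.
public class GpxSketch {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("track.gpx");          // hypothetical input file
        NodeList pts = doc.getElementsByTagName("trkpt");
        for (int i = 0; i < pts.getLength(); i++) {
            Element pt = (Element) pts.item(i);
            double lat = Double.parseDouble(pt.getAttribute("lat"));
            double lon = Double.parseDouble(pt.getAttribute("lon"));
            String time = pt.getElementsByTagName("time").item(0).getTextContent();
            System.out.println(time + " " + lat + " " + lon);
        }
    }
}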

In order to use the MaxScript importer, you have to run the script from the MaxScript rollout on the Utilities panel (click the hammer icon on the right-hand side), then click “MaxScript” and “Run Script”. Point the file dialog at the file downloaded from the link above and it should run.

The script creates a rollout window which allows you to browse for a GPX file. Once a file is chosen it is imported, and an “Import Successful” message is displayed.

The only problems you might get are to do with the format of time recorded by the GPS in the track. If the import refuses to work, then you might need to change the time format as indicated by the comments in the MaxScript file.

One other thing worth mentioning is that the lat and long coordinates have been multiplied by 1000 in order to cope with a lack of granularity in Max. After producing this version of the script which loads data in the WGS84 coordinate system, I then created another version which reprojects the data into the OSGB36 system that Ordnance Survey uses in the UK. This means that we can match up the GPS tracks in Max with our own data on building footprints which comes from Ordnance Survey.

For movies showing the animated GPX tracks, have a look at the Urban Tick website:

http://urbantick.blogspot.com/2009/09/gps-tracks-running-in-3ds-max.html

http://urbantick.blogspot.com/

Genesis: How to Build a Planet

John Conway’s “Game of Life” was one of the first things I ever wrote in Java, back in the days when we were using Java 1.1. This is a slight variation on the traditional 2D view, where the ALife simulation is wrapped around a spinning globe. The results are shown below, along with the link to the web page containing the applet.
Conway's Game of Life

 http://www.casa.ucl.ac.uk/richard/demos/planet/GameOfLifePlanet.html

The way this was created was as follows:

Step 1 – Create the Planet Mesh

I’ve defined my axes with x to the right, z up and y into the screen. This is slightly unusual, but maps to the ground plane which was originally XZ.

// Build the planet mesh. The casts to double matter: without them x/numX and z/numZ
// would be integer divisions and every point would collapse onto the same angle.
for (int z = 0; z < numZ; z++) {
    for (int x = 0; x < numX; x++) {
        double ax = (double) x / numX * 2.0 * Math.PI - Math.PI;     // longitude: -PI to PI
        double az = (double) z / numZ * Math.PI - Math.PI / 2.0;     // latitude: -PI/2 to PI/2
        double cx = radius * Math.cos(az) * Math.sin(ax);
        double cy = radius * Math.cos(az) * Math.cos(ax);
        double cz = radius * Math.sin(az);
        coords[x][z] = new Point3D(cx, cy, cz);
    }
}

The mesh of points can be wrapped around in the x direction, but not Z, so we need an extra line of points at the South pole. I’ve also taken the radius to be 1.0 as using the unit sphere simplifies a lot of the graphics calculations that follow.

This gives you the following result:

Planet Mesh

Step 2 – Spin the World

Next I added an animation thread that increments ‘A’, the angle of rotation of the planet. In the rendering code for the mesh I rotate the points to spin the planet around the poles. The interesting thing here is that you don’t need the Y coordinate as there’s no projection, so that saves a few operations.

xp[i] = radius * (xpoints[i] * Math.cos(A) + ypoints[i] * Math.sin(A));
// yp[i] = radius * (xpoints[i] * Math.sin(A) - ypoints[i] * Math.cos(A));  // not needed: no projection, so the depth coordinate is never used
zp[i] = radius * zpoints[i];

Back faces are removed by looking at the angle between the surface normal and the view direction: any face pointing away from the viewer is not drawn.

Step 3 – Add the Game of Life Simulation

I already had an implementation of this in Java, so I just pulled it into the project. The following Wikipedia article contains everything you could ever need to know about Conway’s Game of Life:

http://en.wikipedia.org/wiki/Conway's_Game_of_Life

Then it’s just a case of running the ALife simulation and linking the output to the cells in the planet mesh. The grid used for the Game of Life simulation and the mesh making up the planet are the same size, so there is a simple one to one relationship.
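As a minimal sketch of that one-to-one link (the array names here are illustrative rather than taken from the applet), updating the mesh from the life grid is just:

// Sketch: the life grid and the planet mesh share the same dimensions,
// so cell (x, z) in the simulation maps straight onto face (x, z) of the mesh.
for (int z = 0; z < numZ; z++) {
    for (int x = 0; x < numX; x++) {
        faceAlive[x][z] = lifeGrid[x][z];
    }
}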

Game of Life Planet

Step 4 – Lights

To improve the realism, I added some lighting using Lambert’s cosine rule. The direction of the light is [-1.0, 0.0, 0.0] which makes the intensity calculation straightforward. The light is assumed to be far enough away that the direction of the light is constant over the whole object. The planet is a unit sphere centred on the origin, so the normal to the surface patch is just a ray through the origin and the centre of the patch. I’ve actually taken the top left corner to save having to calculate the centre point, but it doesn’t make much difference to the effect.

According to Lambert’s cosine rule, the intensity of the patch is proportional to the cosine of the angle between the surface normal and the direction of the light. We use the dot product of the two vectors to get that cosine, and as both vectors are already normalised beforehand, we don’t have to normalise them ourselves.
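In code this boils down to a clamped dot product. A minimal sketch with illustrative variable names (the sign of the light vector may need flipping depending on whether it is taken as pointing from or towards the light):

// Diffuse term: on the unit sphere centred at the origin, the patch corner itself
// serves as the (already normalised) surface normal.
double[] lightDir = { -1.0, 0.0, 0.0 };   // light direction used in the post
double[] normal   = { px, py, pz };       // top-left corner of the patch
double cosTheta = normal[0]*lightDir[0] + normal[1]*lightDir[1] + normal[2]*lightDir[2];
double intensity = Math.max(0.0, cosTheta);   // Lambert: faces pointing away from the light get zero diffuse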

In this view, any Game of Life cell that is ‘on’ is drawn in blue, while any that are ‘off’ are white. The white colour uses the diffuse lighting, while the blue is drawn as emissive so you can see the patterns as they go around the dark side of the globe. I’ve also added a line at the 0 and 180 degree longitude positions so you can see the planet rotating.

Here is an image of a “Gosper Glider Gun” about to shoot gliders at itself from around the other side of the planet. The applet link below contains a number of the more common patterns.

Next Steps:
The mapping of the life grid to the planet mesh could do with some improvement. Anything moving east or west maps around the sphere correctly, but anything moving through the north pole reappears at the south and vice versa. There are better ways to map grids onto spheres, but that’s for next time. I also have an erosion-based model that I wrote a long time ago to create realistic-looking land and water masses, which this was originally intended for.

Crowds and Delegate – Emergent Behaviours

Crowd, transport and urban simulations come down, at their roots, to ‘Agents’ or ‘Objects’ that are assigned a set of rules governing how they move in relation to both the environment and the other agents around them. 3D Studio Max has a built-in ‘Crowd and Delegate’ system which can be used to assign behaviour and therefore create realistic traffic or pedestrian systems in 3D space.

The movie below displays our first tentative steps to explore emergent behaviour via the introduction of simple rules. The movie starts out with a basic ‘wander’ behaviour, where the agents’ only knowledge is the shape of the surface. Moving on, we assign each of our ‘cubes’ (of which we have become quite fond…) a level of vision so they can see ahead and therefore avoid each other and objects in their environment.

Crowd and Delegates – Emergent Behaviour from digitalurban on Vimeo.
Thirdly, the agents seek a ‘sphere’, which could be viewed as a source of food. With the cubes aware of each other, and after tweaking the way they move, a swarm behaviour emerges. Finally, we introduce competing groups with two priorities: firstly to eat and secondly to stay as a group. The majority choose the group over the food, but a couple stray off in search of sustenance and lose the other members.

All of these models are going into our exhibition space, previewed below, to provide a step-by-step guide to the principles of agent-based modelling.

The virtual exhibition space should be online for Windows and Mac in the next few weeks.

From Tile Pyramids to Population Pyramids

It’s actually a stacked bar chart rather than a traditional population pyramid, but the image below shows male/female population by age for all the output areas in England. The red thematic overlay is total population for every OA, which can be clicked to get the age group breakdown shown in the popup window.

Clickable Age Map

This map is a variation on the original clickable OAC map and was built using a new version of the GMapCreator which contains the clickable technology. Traditionally, maps like this have been built using a server and database to translate the click on the client into a geographic area using a point-in-polygon test, and then sending the query data back to the client. This method doesn’t scale when you have limited server resources and are looking to handle high numbers of hits, for example with the Mood Maps that we’ve been doing recently. An alternative solution is to build feature-coded tiles and let the client handle most of the work of displaying the data. With this system there is a second set of tiles, one of which the client downloads when the user clicks on a point. This allows the client to work out which feature has been clicked and request the data for that area as an XML file.
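As a rough illustration of the feature-coded tile idea, here is a minimal Java sketch. The encoding assumed, packing a feature ID into the 24-bit RGB value of each pixel, is an assumption made for the example rather than the GMapCreator’s actual scheme, and the class and method names are invented:

import java.awt.image.BufferedImage;
import java.net.URL;
import javax.imageio.ImageIO;

// Sketch: fetch the feature-coded tile covering the clicked point, read the pixel
// under the click and recover a feature ID from its colour.
public class FeatureTileSketch {
    static int featureIdAt(String tileUrl, int pixelX, int pixelY) throws Exception {
        BufferedImage tile = ImageIO.read(new URL(tileUrl));
        int rgb = tile.getRGB(pixelX, pixelY);
        return rgb & 0x00ffffff;   // assumed encoding: ID packed into the colour channels
    }
}

The client would then request the data for that feature, for example an XML file keyed on the recovered ID, to fill the popup window.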

The hard part is designing a system which allows people to design the popup window without having to resort to programming. In the example above, the graph was created using Google Charts via the GMapCreator’s user interface. All that was needed was to choose the data fields from a list and to select the chart type. The URI string to create the chart comes from an XSLT transform applied to the XML data. This transform is automatically created by the GMapCreator interface, which also allows the rest of the popup window to be designed using a simple HTML editor.

Downloadable Preview – GENeSIS Exhibition Space

The following exhibition space is a proof of concept, looking at the ability to share and display city datasets and simulations within an interactive game engine. Available for download on both the PC and Mac (Intel) platforms, the space is the result of a few days’ work with the Unity engine; it is intended to be viewed in the spirit of development rather than as a completed product.

The room includes our first ‘crowd and delegate’ models, direct from 3D Max; created as basic wander-and-avoid simulations, they provide the building blocks of emergent behaviour within the cityscape.

City-wide datasets can, to be honest, be very ‘dry’; the whole point of Digital Urban is to look at new ways to outreach, visualise and ultimately communicate urban data. The ability to include 3D models via ESRI ArcScene is a notable step forward. Pictured below is the retail and office space in London measured on a 500 m grid; we note some polygon issues here, but these are known and we think we have a way to fix them – it’s to do with the way ArcScene exports. The model forms the centre of the exhibition space:

The room features various architectural models, including the Swiss Re building and the GLA building in London. It also features a number of our latest output movies; the London LiDAR and Second Life agents are of particular note.

The model is, as we mentioned, a proof of concept; the next step is the addition of themed rooms and a more organised structure. We think the concept of virtual exhibition spaces is a strong one, so as ever any comments are most welcome…

Download the model for Windows XP/Vista (221 Mb zip file)

Unzip the file, open the folders and run the .exe file.

Download the model for Mac (222 Mb zip file)

Extract and simply run the .dmg file.

Use the mouse to look around, W/S move forwards/backwards, Space to jump.

GENeSIS Exhibition Space

We have spent the last few days giving the game engine Unity a spin. The Pro version has various additions over the lower-cost Indie edition; most notable for our use are the ability to import movies as textures and the use of dynamic shadows. Our movie below provides an update on progress:

Unity: Creating a City Exhibition Space – Update 2 from digitalurban on Vimeo.

The aim is to create an exhibition space exploring simulation and the city… more to follow.

Visualising City Datasets

The movie below is a visualisation of office and retail space in London. Using data kindly supplied by the Economics Unit at the Greater London Authority, Duncan Smith, a PhD student here at CASA, has calculated the amount of retail and office space in London per 500 metre grid square:

Duncan carried out the analysis in ESRI’s ArcScene as part of his PhD. Intriguingly, it is possible to export from ArcScene into Autodesk’s 3D Studio Max, allowing a much higher level of visual fidelity. Of course, once it is in Max you can then export to a number of other platforms, such as Unity, as our previous post on Digital Urban explained using the same data.

Here at CASA we have just completed the initial phase of our London database, so we will be exploring more ways to visualise city-based datasets in forthcoming posts.

Tiled Maps without the Internet

This is one of those things that’s obvious once you know it. I’ve often found myself developing code for tiled maps without a connection to the Internet; usually I just want a quick check to see if the tiles are rendered correctly, so I don’t need the background map.

The obvious solution is to create an OpenLayers page with your custom tiles as the base layer. The JavaScript that makes OpenLayers work can be served locally, unlike the Google Maps API, so with only one layer of locally served tiles you don’t need an Internet connection.

The HTML pattern follows the OpenLayers ‘howto’ guide and uses a custom TMS layer as follows:

var googlecustom = new OpenLayers.Layer.TMS("Test", "http://www.casa.ucl.ac.uk/googlemaps/", { 'type': 'png', 'isBaseLayer': true, 'getURL': TMSCustomGetTileURL });

‘TMSCustomGetTileURL’ returns the tile URL based on the x, y and z values in whatever format you are storing tiles in. For this project, it was the keyhole string format.
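The getURL override itself is a small piece of JavaScript, but the naming logic is easy to sketch. Here is a rough Java version of turning a Google-style x, y, zoom tile reference into a keyhole quadtree string; the q/r/s/t quadrant lettering shown is the commonly used convention and is an assumption for the example rather than something taken from this project’s code:

// Sketch: build a keyhole-style quadtree name ("t" plus one quadrant letter per zoom level)
// from Google-style tile coordinates, where y runs from the top of the map downwards.
static String keyholeString(int x, int y, int zoom) {
    StringBuilder sb = new StringBuilder("t");
    for (int i = zoom - 1; i >= 0; i--) {
        int right  = (x >> i) & 1;   // 0 = left half, 1 = right half at this level
        int bottom = (y >> i) & 1;   // 0 = top half, 1 = bottom half at this level
        if (bottom == 0) sb.append(right == 0 ? 'q' : 'r');
        else             sb.append(right == 0 ? 't' : 's');
    }
    return sb.toString();
}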

OpenLayers map showing data from the BBC Look North Recession mood map using dynamic tile creation

The image above was taken from a prototype system using C# and SQL Server 2008 to generate tiles dynamically from data stored in a CSV file at a URL.