Now let's go to QGIS, connect to the PostGIS instance, add the tables and create some test data manually.
Here I have added fields in green with bold number labels and plots in brown with smaller number labels. The numbers represent the pkid fields.
Now here I can quickly run a query to identify the plots that are in fields
And it correctly identifies that plot 1 is within the fields layer.
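A minimal sketch of such a query, assuming the geometry columns in both tables are named geom:
SELECT t00002plots.pkid
FROM t00002plots
JOIN t00001fields ON ST_Within(t00002plots.geom, t00001fields.geom);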
But what would be great in an application is to have some kind of junction table through which individual master records could display their children. For this we need a junction table that links the fields and plots tables, showing the pkids from each. Extending the containment query above:
SELECT t00002plots.pkid AS plotspkid, t00001fields.pkid AS fieldspkid
FROM t00002plots
JOIN t00001fields ON ST_Within(t00002plots.geom, t00001fields.geom);
Now I will move plot 2 into field 3 and rerun the above.
The layer now looks like
and running the former query we get.
Now it's possible to either create a junction table to hold this information..
eg CREATE TABLE t00010fieldplotjunction AS
SELECT t00002plots.pkid AS plotspkid, t00001fields.pkid AS fieldspkid
FROM t00002plots
JOIN t00001fields ON ST_Within(t00002plots.geom, t00001fields.geom);
or we can create a view that will recalculate this every time it is queried: CREATE VIEW v001FieldPlotJunction AS
SELECT t00002plots.pkid AS plotspkid, t00001fields.pkid AS fieldspkid
FROM t00002plots
JOIN t00001fields ON ST_Within(t00002plots.geom, t00001fields.geom);
Now if I add a few more plots and fields and then pull up the view, we shall see that everything has been adjusted
and running the view we now get
In some circumstances this calculation may be expensive, so we may wish to create a junction table overnight; at other times we may be happy to do it fully dynamically. Of course, in a front end you could query and filter so that only one record was compared against the fields layer at any time. Very useful nonetheless.
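For example, a front end pulling up the parent field for a single plot might issue something like the following sketch against the view created above:
SELECT fieldspkid FROM v001FieldPlotJunction WHERE plotspkid = 2;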
In 2017 I was involved in an important work project to transfer all the records in a legacy system, which was being deprecated by the vendor, into another maintained system. We were in some ways fortunate because both systems had been designed by a single company and they were encouraging us to transfer. We had already delayed the transfer for several years but were aware that we now had to move. The vendor did have some tools in place, had staff dedicated to such transfers and was offering favorable consultancy rates. The amount of data was not horrendous in computing terms, but it was far beyond anything that manual data correction could cope with, and the system was an absolute core system upon which several departments completely depended. These were systems that all departments are in from the moment they start the work day to the end. Generally it is unusual if they are down for more than 5 minutes in a month; all work pretty much stops when they stop, and under no circumstances could they be down for more than a day without special dispensation and coordination to manage customer expectations.
The whole project was a success, although it was challenging. Here is an outline of the steps we took. As ever, order is important in most of the steps.
Inform managers of all involved sections and ensure they are on board – identify and ring fence budget
Appoint project manager on vendor and client side
Draw together a team to perform the transformation.
Draft a timetable of how long it will take, putting in place planning for tutorials on the systems and for consultancy.
Request managers to put forward staff on all sides willing to be involved
Identify any omissions in knowledge and start to identify how these can be remedied. Kick off and complete the acquisition of said staff.
Meet with lead staff to confirm buy-in. Request buy-in from staff, including ring-fencing of holidays etc. to ensure key staff are available at required times.
Set up test systems that all individuals have access to, and ensure that the old and new systems can be viewed simultaneously by individuals. Ensure that the domain specialists can identify the processes that will be mirrored from the old system in the new system.
Give the DBAs, or those who will be doing the data transfer, access to the source databases so that they can start thinking about how to pull out the information.
Training for all individuals concerned in new systems.
In the new system, start tasking individuals with how they are going to do the simple processes – eg register a record, approve a record, alter a record and get reports out. If possible allow new champions to start to define things like reports.
Start making up any new lookup fields compared with the old lookups, and start tasking individuals with the creation of the reports and letters that will need to be produced.
Start mapping the data from the old system to the new system – Excel spreadsheets can be used for this, showing the data coming from the old system and the fields it will go into in the new system. Divide this task up between domain users – this step needs to be done after the old and new systems are on domain users' machines. As part of this, the applications in question should, if possible, expose the table and field names of the source and target fields. With the systems we were involved with this was possible for both the old and new systems.
For each form on the two systems try to identify the below
Source table.field Target table.field
Also get them to map the lookup table values if direct transfer is not possible or if alias ids are used in these lookups.
Give both mapping documents to the ETL people to allow them to start writing the queries. It is unlikely that there will be a straight transfer across from table to table. Field and table names will probably be completely different, and the table structure will in certain places differ too; in this respect it is good to have a really nice schema diagram of both source and target.
Allow the data individuals to write scripts that can be run against the present live system. If necessary this doesn't need to be truly live: you could copy every night and then work on a one-day-old database backend, which is what we did. This means work can go on in the old system while the transfer can be rehearsed at the touch of a button.
Encourage the DBAs to run these scripts every day to ensure that running them at go-live is absolutely no issue. Our scripts only took about half an hour to run, so this wasn't a problem. I was personally involved in writing the SQL for those, and I had queries in place to cross-tab the number of records coming into each new table, so I could see new records and information from the old system trickling in and being transferred (see the sketch after this list).
Test data input into new system
Check test data input into new system with reference to domain users.
Confirm the go-live date and ensure staff are available for issues.
Go live to production and start all new procedures, ensuring key technical and domain players are on hand to find flexible solutions to problems.
Project review, ongoing maintenance and improvement of the new system.
After a suitable time, turn off the old system if possible.
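As a flavour of the cross-tab record counting mentioned in the scripting step above, here is a minimal sketch – the table names are hypothetical stand-ins for the real target tables:
SELECT 't001target' AS targettable, count(*) AS records FROM t001target
UNION ALL
SELECT 't002target', count(*) FROM t002target;
Run daily, a query of this shape makes it easy to watch the counts in each new table rise as records trickle across.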
I wanted to be able to run hundreds or thousands of queries from Postgres like I can in MS Access; this didn't turn out to be too difficult.
Here's something that works. Firstly let's create a new database:
CREATE DATABASE sqlloopdb;
You will then need to connect to the database.
Next I will create 2 tables: one table called t001sqltarget – this is the table we shall change with queries; and one called t002sqlrun – this will contain the queries that we will run.
Please note the field names are important as well, but I will let you study them in the code.
I then have 4 inserts that place valid SQL strings into the field sqltorun.
CREATE TABLE t001sqltarget (pkid serial primary key, fieldforupdate varchar(1));
CREATE TABLE t002sqlrun (pkid serial primary key, sqltorun varchar(1000));
INSERT INTO t002sqlrun(sqltorun) values ('INSERT INTO t001sqltarget(fieldforupdate) values (1);');
INSERT INTO t002sqlrun(sqltorun) values ('INSERT INTO t001sqltarget(fieldforupdate) values (2);');
INSERT INTO t002sqlrun(sqltorun) values ('INSERT INTO t001sqltarget(fieldforupdate) values (3);');
INSERT INTO t002sqlrun(sqltorun) values ('INSERT INTO t001sqltarget(fieldforupdate) values (4);');
First let's run the above and see what we have. Below you can see that I create the database and then connect to it before opening the editor, from which I run the above code. I then take a look at the tables in the database and run a select to return all the records within the t001sqltarget table, of which there are none.
Now let's run the following code and then take a look at t001sqltarget.
DO $$
DECLARE
    stmt record;
BEGIN
    -- loop through every stored statement in t002sqlrun and execute it
    FOR stmt IN SELECT sqltorun FROM t002sqlrun LOOP
        EXECUTE stmt.sqltorun;
    END LOOP;
END $$;
And after running there are 4 rows in the table.
Every time I run the DO code four more records will be added to this table. Any SQL could be included in t002sqlrun, and this is a nice demonstration of what I had previously been able to do in MS Access; it is massively powerful. It could be used, for instance, to calculate multiple measurements.
Here we take much of the work covered in post 010, use st_union to merge the parts into a single record, and place it in a table created by transforming a view into a table.
Firstly go to your psql command line and ensure that you are logged in with the username that you wish to be the owner of the table – in my case, general.
Now the same measurement as before, but this time we shall make a view out of the measurements, load that into a new table, then delete the view, leaving us with the table holding a combined measurement.
CREATE VIEW v001firstmeasurement AS SELECT seq, id1 AS node, id2 AS edge, cost, geom, agg
FROM pgr_dijkstra( 'SELECT id, source, target, st_length(geom) as cost FROM public.t01roadnetwork', 15883, 10967, false, false ) as di
JOIN public.t01roadnetwork pt ON di.id2 = pt.id ;
CREATE TABLE t003 as select sum(cost), st_union(geom) from v001firstmeasurement;
DROP VIEW v001firstmeasurement;
It is important in Notepad to remove the blank spaces; in the editor this looks as follows.
We should then get some kind of confirmation that the view and table are created before the view is dropped again. There might be a more efficient way of doing this, but this was my first experiment.
And we can go back to QGIS 3.4 and display the now single line in our project.
Complete with now accurate measurement.
It should be noted that if you wanted to do multiple line measurements you would need to step out of the create statement and use an insert statement for all subsequent insertions, as follows.
insert into t003(sum,st_union) select sum(cost),st_union(geom) from v001firstmeasurement;
This would allow you to do multiple measurements.
I haven’t added up the measurement but it looks about right.
I had been using the 2010 WordPress standard theme as the basis for Round Up the Usual Suspects but decided it was time to upgrade. I decided to go for the 2016 WordPress standard theme as it is so well tested, and with a relatively large back catalogue my primary concern was that I could port everything forward as easily as possible. I will be working on making it as user friendly as possible.
The objective here is to write a series of queries that can be used to measure the shortest distance between selected paired locations on a network, such that the geometry of the routes can be calculated and displayed on a map.
For this particular tutorial you will need QGIS 3 or higher and a version of Postgres – I am using version 11.0 here (I have upgraded since my former posts). I believe this tutorial will work with previous versions, but if you are following along now might be a good time to upgrade.
QGIS 3.4 or higher is needed because the Ordnance Survey road network geometry contains a z coordinate, which will prevent the creation of the geometry required for measurement. QGIS 3 introduced the ability to save geometry excluding the z coordinate. If you have a network without z coordinates you should not require this.
So let us first get the data. Tick the option in the top right hand corner, scroll to the bottom and submit your request; you will be asked a few basic questions along with the email address you wish the download to be sent to. After a few minutes you should be sent the download link by email – follow the instructions and you should be able to get the information.
The information you are downloading is a block framework for the whole of the UK. When you unzip the download into a folder you will see multiple files. We will be using a section of the national dataset relating to Edinburgh – NT. Choose the block or selection that you are interested in; more blocks will take more time, however.
Open QGIS and create a new project : eg EdinburghRouting.qgz
Load in your chosen network block : eg NT_RoadLink.shp
Select the layer you just loaded in : eg NT_RoadLink.shp
and navigate to Layer / Save As in the menu.
Fill out the Save Vector Layer as… dialog box. IMPORTANT – within the Geometry section ensure Geometry type is set to LineString and Include z-dimension is unticked.
Give the new file a name : eg ntosroutingnetwork.shp
Within the layer dialog of QGIS your new layer should appear; you can now remove the NT_RoadLink shapefile from the project.
Next go to your version of PostgreSQL and using a superuser account create a new database : eg edinburghrouting
I would suggest you use lower casing as well.
As a superuser, ensure you add the postgis and pgrouting extensions.
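For reference, those steps look something like this at the psql prompt:
CREATE DATABASE edinburghrouting;
\c edinburghrouting
CREATE EXTENSION postgis;
CREATE EXTENSION pgrouting;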
Next I set up the following connection between the QGIS project and PostgreSQL
Personal tastes may vary, but I like to select Also list tables with no geometry and Allow saving/loading QGIS projects in the database.
OK the selection and you should now have a connection to the new database you just created.
QGIS has an excellent DB Manager window, which we will use to load our new shapefile (the one excluding the z dimension) into the new database we created in PostgreSQL.
Ensuring that you have a connection to your localpostgis database hit the
Here I load the information into a new table t01roadnetwork
On pressing OK there will be delay after which if things go well you will receive the following message.
As ever it is good to check that things appear to be going well. Add the layer to your project and determine approximately whether import was successful.
Next, back in the psql command line and in an editor, we are going to run 4 queries. The first 2 add columns that are required by the shortest-distance algorithm we shall use, the third will allow anyone to write an aggregation function to see the total cost of the route, and the last creates a topology for the road network.
alter table public.t01roadnetwork add column source integer;
alter table public.t01roadnetwork add column target integer;
alter table public.t01roadnetwork add column agg smallint default 1;
select pgr_createTopology('public.t01roadnetwork', 0.0001, 'geom', 'id');
If things go correctly you should see the database engine start to create the topology and what I see is it gradually stepping through the creation process.
and on completion you should have something like the following:
A new table has been added to the edinburghrouting database, and the next step is to display the network and its vertices in QGIS.
In QGIS we should see something like the following.
The next thing that I like to do is to label the nodes for quick identification.
And look at the t01roadnetwork table to see that the columns are all present and correct.
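A quick way to check from psql is the describe command:
\d public.t01roadnetwork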
We are now ready to make a measurement. Here I choose the nodes 15883 and 10967
SELECT seq, id1 AS node, id2 AS edge, cost, geom, agg
FROM pgr_dijkstra(
    'SELECT id, source, target, st_length(geom) as cost FROM public.t01roadnetwork',
    15883, 10967, false, false
) as di
JOIN public.t01roadnetwork pt
ON di.id2 = pt.id;
Now we can load this as a new layer and then improve the symbology
Doing this we get.
It should be noted that the line you see is a collection of lines. In my next post I will go through and indicate how we can amalgamate that into a single line for storage in a table.
Congratulations – if you have got this far you should be able to measure the shortest distance between any two points on a valid network by altering the node numbers.
Note: As of PostgreSQL 9.1, most procedural languages have been made into “extensions”, and should therefore be installed with CREATE EXTENSION not CREATE LANGUAGE. Direct use of CREATE LANGUAGE should now be confined to extension installation scripts. If you have a “bare” language in your database, perhaps as a result of an upgrade, you can convert it to an extension using CREATE EXTENSION langname FROM unpackaged.
As per the nature of recursion a function is a variable is a function.
Previously, in 005 and 006, we wrote functions that returned subsets of queries; they were effectively dynamic queries where I entered a parameter that was used in a select query. This meant that although the function was returning a variable, that variable was a query of a select statement.
What if we wish to return just a single value, as in, say, translating centigrade to fahrenheit or some other calculation?
In such a case you simply state that the function returns a variable, and you state the variable type.
CREATE FUNCTION add(integer, integer) RETURNS integer
AS 'SELECT $1 + $2;'
LANGUAGE SQL
RETURNS NULL ON NULL INPUT;
There are a few interesting things here which should be borne in mind:
In this case the addition is performed in SQL
I have to specifically name the language of the calculation (SQL) which suggests that if you stated another language it might accept it!
You still need to select the function to run it. This indicates that Postgres 9.5 doesn't execute functions as some environments do (MS Access for example) – I have read that version 11 changed this and allows you to execute or perform a function. In MS Access you don't even need to write execute, simply the name of the function with the integers.
Variables are referred to by their input position, unlike VBA where you dimension the variable and give it a name. I am unclear at the moment on the advantages of the former or the latter, but it is interesting nonetheless. I first came across something similar with AutoHotkey.
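As noted above, running the function is simply a matter of selecting it:
SELECT add(2, 3);
which returns 5.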
In 005, when last we left our intrepid explorers, we were wondering: having defined an inline table that contains the definition of the selection purely in the user-defined function, how do we see what that selection is, as it might not be a presently defined object? SETOF references an object we can execute independently of the function; TABLE() does not.
Well apparently magically you can run the following.
Like all platforms it is possible to create bespoke functions in Postgres
For the following I assume;
1. Postgres 9.5 is installed with the server running (syntax should be the same for other versions)
2. A database called exampledb has been created
3. In this database there exists table called t001landparcels with some records and a field called PKID
4. You are logged into the exampledb with a username called general that has been granted CREATEDB role.
5. You are in psql
The following can be used to create a simple function
CREATE FUNCTION getrecords(int) RETURNS SETOF t001landparcels AS $$
SELECT * FROM t001landparcels WHERE pkid <= $1;
$$ LANGUAGE SQL;
If you have been careful and done this exactly as per my initial assumptions it should return the confirmation CREATE FUNCTION.
This tells us that the function has been created; it will now exist as a permanent addition in the same schema as the table.
We can identify the function by either listing all the functions and scrolling through
or listing the individual function
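In psql the two commands are:
\df
\df getrecords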
This should return your newly created function
Now to run the function – unlike MS Access you can't simply run the function, you need to allocate it to a select statement.
SELECT * FROM getrecords(2);
This should return everything you are looking for.
Now you should be able to drop the function using the following SQL
DROP FUNCTION getrecords(int);
Note how you have to define the function with its parameter. I have read (no idea whether it's true) that in version 10 of Postgres you can simply use
DROP FUNCTION getrecords;
Writing code in psql does require accuracy so getting things to work does usually involve some experimentation. I have removed much of this from my screenshots!
An alternative is as follows
CREATE FUNCTION getrecords(int) RETURNS TABLE (pkid integer, parcelname text) as $$
SELECT pkid, parcelname FROM t001landparcels WHERE pkid <=$1;
$$ LANGUAGE SQL;
This appears to result in the same answer; I am not clear what the difference is yet. Note the result would have been the same if I had defined the table with the addition of a geometry column.
Note I dropped the old getrecords function before I created this one. I am not sure what would have happened if I had tried to create one over the other.
I found this second method on Stack Overflow when investigating functions, accompanied by the following, to me slightly mysterious, quote:
This is effectively the same as using SETOF tablename, but declares the table structure inline instead of referencing an existing object, so joins and such will still work.
Which sounds important to me, but I'm struggling at present to understand its meaning!
For this you will need to have a version of the Postgres database engine installed and running, and you will need to have created a database with the PostGIS extension installed.
Login to the database you wish to create the table in
type the following:
CREATE TABLE t001landparcels (PKID SERIAL PRIMARY KEY, PARCELNAME VARCHAR(50), GEOM GEOMETRY(POLYGON,27700));
Here I do this and then check on the tables with the \dt command before inspecting the columns of the table itself using the \d command.
and here I open up QGIS and link to my local postgres instance and the exampledb database;
and here I connect to it and draw a polygon. If you are wondering where it is, this is Inchkeith in the Firth of Forth, an island very visible from George Street in Edinburgh. If you have flown into Edinburgh you will have flown almost over it.
and here, after having digitised a single polygon, I look at the contents of the table.
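The raw geom column prints as hex-encoded binary, so a more readable way to inspect the contents is to wrap it in ST_AsText, eg:
SELECT pkid, parcelname, ST_AsText(geom) FROM t001landparcels;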
SELECT count(*) FROM t001landparcels;
Produces the more helpful count of records in the table.
I am just getting into Postgres and here are some rough notes for my reference.
Assuming you have a postgres admin account, you want to sign in first of all and create a database.
To find the command line, go to search in Windows and type psql.
Ensure that your postgres engine is running first.
You should be presented with the following
There are defaults for connecting to the local host; keep hitting return initially until you reach the following:
You will now need to know your password – enter it here and press return.
I will deal with the warning message further in the post – but a lot of people experience this message so I wanted to keep it in at present.
From my initial investigations, as ever, it is a good idea to restrict users to particular privileges. At this level I am not being particularly refined – I would like to have a power-user role that I can allocate to people and give a defined password.
Signing in, you can check out the roles and users as follows – on starting up a new instance you may well see something like this after typing \du.
So here I set up a user called general by creating a role called general, to which I give create DB rights.
I would recommend something stronger than the classic password password.
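One way of doing this in a single statement – the password shown here is exactly the weak example being warned against, so substitute your own:
CREATE ROLE general WITH LOGIN PASSWORD 'password' CREATEDB;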
Issuing the \du command again we get to see the roles
Now we can close down and go back in, but this time logging in as the username general by altering the appropriate item when asked.
Note how the =# characters have been replaced by => – this appears to denote a non-superuser sign in.
To identify what username you are logged in as, type \c at the prompt.
My investigations suggest that the # sign denotes that you are logged into the database as a superuser.
So first of all let's get rid of that annoying warning message when you log in at psql.
I am running Postgres version 9.5 – your version may vary – but you can remove the warning by editing the runpsql.bat file. Every version of Postgres has this file, and it will be located in the directory equivalent to 9.5's in my installation.
Add the line
cmd.exe /c chcp 1252
as per the underline, and save the file
Now fire up psql as usual you should get the following
It should be noted that if you REM out the SET statements you can choose to log in with particular server / localhost / database / port and username presets, which may be useful if you find yourself constantly going into a particular database as a particular user.
Here you see that the warning note has stopped.
It should be noted that using the general username you will NOT be able to create databases.
In order to CREATE databases you will have to be signed in with a username with sufficient privileges; here I am in as postgres and I create a database called ExampleDB.
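The statement itself is simply:
CREATE DATABASE ExampleDB;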
You can see that on carrying out a successful command we usually see a repeat of the command.
To get a list of all databases in the instance type \l
It can be seen that despite typing the name in capitals the database has been created in lower case; the command line folds unquoted identifiers to lower case. If you desperately want capitals you can quote the name (CREATE DATABASE "ExampleDB";) or go to the pgAdmin tool.
As part of the general move towards the web I continue to investigate and learn about web development. An important aspect for any developer considering how to serve programs to clients and colleagues with as little resistance as possible is speed: users will be clicking these things potentially tens of times a minute, and waiting to go from one screen to another significantly impacts their productivity. No wonder, then, that we are hearing so many stories about dramatic improvements in site success from improving load speeds. But how to measure web site speed accurately? At work and for desktop applications I have resorted to downloading a stopwatch onto my Android phone, which can be quite useful if there are consistent and substantial differences in speed, but it is still a somewhat blunt and inaccurate tool.
So the other day I was again investigating how to improve the delivery of web sites using the new Google Progressive Web Application paradigm.
I discovered within Chrome there is an Audit feature beneath the inspection option.
To use this, open the web page you are interested in measuring in Chrome, ensuring that it is a standard Chrome window (and not the PWA window).
Right click, choose Inspect, then select the Audits option as shown below.
At which point you should be presented with the following
Now hit the Run audits button at the bottom
We see the statistics in the top right. From my initial running of this on several of my sites, the Progressive Web App metric seems to be fairly consistent in ranking sites.
Performance seems to vary every time you run it, even if you are on the same page and URL.
Here for example is me running the same audit literally five minutes after the last picture.
So all in all, definitely an improvement in metrics, but with some of the metrics varying so much from run to run it may still be best used for giving a general indication of performance over time. I have just upgraded this site to WordPress 5.0.1, although the theme is still from 2010. It should be noted my MS Access applications still transfer between forms within fractions of a second, so fast in fact that I am unable to measure them. Websites are getting better and there are sites now that are very fast. Still some way to go, though, before they can beat the blistering speed of desktop.
I have started looking at new themes for the site but I find I like a lot about this theme and am having trouble finding anything I am quite as happy with.
A simple function that will loop through and create strings that append a number to a simple string. These strings will then be used to create update queries. The same could be done in Excel or any other spreadsheet, but this stores the queries nicely and neatly in a table.
In this instance you want to have created a table called T002ResidentialSearch, which has the fields ResidentialString and Houses (as used in the code below).
Public Function CreateSimpleStrings()
Dim i As Integer
Dim rs As DAO.Recordset
Dim db As DAO.Database
Set db = CurrentDb
Set rs = db.OpenRecordset("T002ResidentialSearch")
For i = 2 To 100
    'Add a record holding the generated string and its house count
    rs.AddNew
    rs!ResidentialString = "Erection of " & i & " houses"
    rs!Houses = i
    rs.Update
Next i
rs.Close
Set rs = Nothing
End Function
Here I am trying to automatically load log files into an MS Access file. Log files are actually txt files which need their extension changed if, like me, you wish to automate their import into MS Access. The following Visual Basic Script takes a file called FS_LS.log and changes it to FS_LS.txt. (I have another function that then takes this file and imports it into an MS Access application.)
For safety this script creates a copy of the FS_LS log and renames it with a txt extension, combining hours and minutes into the title. Thus runs need to be at least one minute apart, otherwise it will throw an error.
If you don't have a feed to some kind of constant stream of log files, this will copy the txt back to a log file ready to be copied again (something I found useful for testing). Next you want to call this from within your MS Access application prior to your data import function, using the previously posted function.