expose postgres to the outside world

firewall

Open the Postgres port (5432 by default) in whatever firewall you use.

pg_hba.conf

# use /32 to allow a single client address; a /0 mask would match any host
host    all             all             XXX.XXX.XXX.XXX/32          md5

postgresql.conf

# listen_addresses takes the server's own interface addresses (or '*' for all);
# changing it requires a restart, not just a reload
listen_addresses = 'localhost,XXX.XXX.XXX.XXX'
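With the firewall and configs in place, a quick way to verify the port is actually reachable from outside, without a Postgres client installed. A Python sketch; the host below is a placeholder:

```python
import socket

def can_connect(host: str, port: int = 5432, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# usage: can_connect("db.example.com")  # placeholder host
```

This only proves the port is open; authentication is still governed by pg_hba.conf.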

import osmnames into postgres

Download links & more info on OSMNames: https://github.com/osmnames/osmnames

Data import:

create table if not exists geonames (
	name text,
	alternative_names text,
	osm_type text,
	osm_id bigint,
	class text,
	type text,
	lon numeric,
	lat numeric,
	place_rank integer,
	importance numeric,
	street text,
	city text,
	county text,
	state text,
	country text,
	country_code text,
	display_name text,
	west numeric,
	south numeric,
	east numeric,
	north numeric,
	wikidata text,
	wikipedia text,
	housenumbers text
);
delete from geonames;

create table if not exists housenumbers (
	osm_id bigint,
	street_id bigint,
	street text,
	housenumber text,
	lon numeric,
	lat numeric
);
delete from housenumbers;

COPY geonames FROM 'S:/ome/location/planet-latest_geonames.tsv' DELIMITER E'\t' CSV HEADER;
COPY housenumbers FROM 'S:/ome/location/planet-latest_housenumbers.tsv' DELIMITER E'\t' CSV HEADER;

Note: COPY ... FROM reads from the server's filesystem and requires superuser (or pg_read_server_files) rights; when loading from a client machine use psql's \copy instead.

Note that with v2 of OSMNames there may be duplicates (https://github.com/OSMNames/OSMNames/issues/162), so it is worth cleaning them up:

-- distinct on keeps one (arbitrary, unless an order by is added) row per osm_id
select distinct on (osm_id) * into geonames_clean from geonames;
select distinct on (osm_id) * into housenumbers_clean from housenumbers;
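select distinct on keeps the first row per key. The same idea as a Python sketch, with hypothetical row dicts:

```python
def distinct_on(rows, key):
    """Keep the first row encountered for each value of `key`,
    mimicking SELECT DISTINCT ON (key) without an ORDER BY."""
    seen = {}
    for row in rows:
        seen.setdefault(row[key], row)
    return list(seen.values())

rows = [
    {"osm_id": 1, "name": "a"},
    {"osm_id": 1, "name": "a-dup"},
    {"osm_id": 2, "name": "b"},
]
print(distinct_on(rows, "osm_id"))
# [{'osm_id': 1, 'name': 'a'}, {'osm_id': 2, 'name': 'b'}]
```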

Add some PKs & indexes to speed things up a bit:

ALTER TABLE public.geonames_clean
    ADD PRIMARY KEY (osm_id);
ALTER TABLE public.housenumbers_clean
    ADD PRIMARY KEY (osm_id);
CREATE INDEX geonames_clean_country_code_idx
    ON public.geonames_clean USING btree
    (country_code COLLATE pg_catalog."default");
-- the house-number join below goes via street_id, so index that too
CREATE INDEX housenumbers_clean_street_id_idx
    ON public.housenumbers_clean USING btree
    (street_id);

Extract the data for a country & glue the house numbers on:

select
	gn.*,
	hn.osm_id as hn_osm_id,
	hn.street_id as hn_street_id,
	hn.street as hn_street,
	hn.housenumber as hn_housenumber,
	hn.lon as hn_lon,
	hn.lat as hn_lat
into
	nl_geonames_complete
from
	geonames_clean gn 
	left outer join housenumbers_clean hn on gn.osm_id = hn.street_id
where
	gn.country_code = 'nl';

Finally, output the extract to a tab-separated file:

copy (select * from nl_geonames_complete) to 'S:/ome/location/nl_geonames_complete.tsv' with CSV HEADER DELIMITER E'\t';
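The exported file can then be consumed pretty much anywhere, e.g. streamed back as dicts in Python (the path is a placeholder):

```python
import csv

def read_tsv(path):
    """Stream rows of a tab-separated file (with a header row) as dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.DictReader(f, delimiter="\t")
```

csv.DictReader takes the column names from the header row that COPY ... WITH CSV HEADER writes out.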

Generate Jsonix mappings for INSPIRE extended WMS capabilities

Even though Jsonix is a well-recognized tool, I only discovered it recently, while looking into parsing WFS capabilities documents in JS.

Once I had WFS operational, I started looking into a WMS capabilities parser, so I can handle querying OGC services in a unified way regardless of the mapping library (ol, leaflet).

It is pretty straightforward; all the mappings are here: w3-schemas and ogc-schemas. Some problems will pop out, though, when a service happens to provide INSPIRE extended capabilities. Using only the standard mappings will not do; Jsonix will complain about a missing mapping for {http://inspire.ec.europa.eu/schemas/inspire_vs/1.0}ExtendedCapabilities.

In order to get things right, the proper mappings have to be provided. Luckily the schemas are accessible via inspire-schemas; one just needs to generate the mappings, as the repo does not provide them. The mapping generator can be obtained from https://github.com/highsource/jsonix-schema-compiler; then it's just a matter of executing the following:

java -jar jsonix-schema-compiler-full.jar -p INSPIRE_VS_1_0 -b inspire-schemas-1.0.0\schemas\src\main\resources\inspire.ec.europa.eu\schemas\inspire_vs\1.0\bindings.xjb -d mappings inspire-schemas-1.0.0\schemas\src\main\resources\inspire.ec.europa.eu\schemas\inspire_vs\1.0\inspire_vs.xsd

If you happen to get the following error:

[ERROR] Property "Title" is already defined. Use <jaxb:property> to resolve this conflict.
  line 232 of http://www.w3.org/1999/xlink.xsd

[ERROR] The following location is relevant to the above error
  line 219 of http://www.w3.org/1999/xlink.xsd

[ERROR] Property "Title" is already defined. Use <jaxb:property> to resolve this conflict.
  line 261 of http://www.w3.org/1999/xlink.xsd

[ERROR] The following location is relevant to the above error
  line 246 of http://www.w3.org/1999/xlink.xsd

[ERROR] org.hisrc.jsonix.execution.JsonixInvoker - The model is null, there was probably a problem parsing schemas.

Make sure to include a binding for xlink:title in the bindings file:

<?xml version="1.0" encoding="UTF-8"?>
<jxb:bindings version="1.0" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" xmlns:xs="http://www.w3.org/2001/XMLSchema">
	<jxb:bindings schemaLocation="../../common/1.0/common.xsd" node="//xs:complexType[@name='service']">
		<jxb:class name="InspireService"/>
	</jxb:bindings>
	<jxb:bindings schemaLocation="../../common/1.0/common.xsd" node="//xs:complexType[@name='keyword']">
		<jxb:class name="InspireKeyword"/>
	</jxb:bindings>
	<jxb:bindings schemaLocation="inspire_vs.xsd" node="//xs:element[@name='ExtendedCapabilities']">
		<jxb:class name="InspireExtendedCapabilities"/>
	</jxb:bindings>
	<jxb:bindings scd="x-schema::xlink" xmlns:xlink="http://www.w3.org/1999/xlink">
		<jxb:bindings scd="/group::xlink:locatorModel/model::sequence/xlink:title">
			<jxb:property name="LocatorTitle"/>
		</jxb:bindings>
	</jxb:bindings>

</jxb:bindings>

asp.net core custom model validation output

Starting with asp.net core 2.1, a unified model binding error is returned when the input does not bind to the expected model. The API returns a 400 with a message similar to:

{
    "someProperty": [
        "Some error."
    ]
}

There are scenarios, however, when one may want to return a customised error message format that is used consistently across an application. In order to do this, the following should be done:

Disable the default model validation:

public void ConfigureServices(IServiceCollection services)
{
      //...

      //this should disable the default model validation, so custom attribute can be used instead
      services.Configure<ApiBehaviorOptions>(opts =>
      {
          opts.SuppressModelStateInvalidFilter = true;
      });

      //...
}


Create a custom action filter:

using System.Linq;
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;

public class ValidateModelFilterAttribute : ActionFilterAttribute
{
    /// <inheritdoc />
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            // flatten the per-field error collections into a single "field: message" array
            var errs = context.ModelState.Keys.SelectMany(
                key => context.ModelState[key].Errors.Select(e =>
                    $"{key}: {(!string.IsNullOrWhiteSpace(e.ErrorMessage) ? e.ErrorMessage : e.Exception?.Message)}")
            ).ToArray();

            context.Result = new ObjectResult(new { Errors = errs })
            {
                StatusCode = (int) HttpStatusCode.BadRequest
            };
        }
    }
}


Decorate a controller or an action method with the custom filter, or register it globally.


More info here: https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/filters?view=aspnetcore-2.2

pgsql - update jsonb with behavior similar to mongodb $inc operator

I was looking into upserting data into a pgsql jsonb field, but with behavior similar to MongoDB's $inc operator.

The upsert itself was solved by using:

insert ...  
on conflict(cols) do update set ...


This snippet is just a quick reminder of how jsonb can be updated with new keys:

select
  d || jsonb_build_object(
    'x', coalesce(cast(d->>'x' as int), 0) + 40,
    'y', coalesce(cast(d->>'y' as int), 0) + 40
  )
from (
  select '{"x":5000, "z":100}'::jsonb as d
) as data
-- yields {"x": 5040, "y": 40, "z": 100}
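The || operator merges the right-hand object into the left one, overwriting existing keys and adding new ones; combined with coalesce this gives the $inc-like behavior. As a mental model, the same logic in Python (names are illustrative):

```python
def jsonb_inc(doc, increments):
    """Mimic doc || jsonb_build_object(...) with coalesce-based increments:
    missing keys start at 0, existing keys are bumped, other keys survive."""
    out = dict(doc)
    for key, delta in increments.items():
        out[key] = int(out.get(key, 0)) + delta
    return out

print(jsonb_inc({"x": 5000, "z": 100}, {"x": 40, "y": 40}))
# {'x': 5040, 'z': 100, 'y': 40}
```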

And a more complete example:

CREATE TABLE monitoring.usage_stats
(
  id serial NOT NULL,
  date timestamp without time zone NOT NULL,
  application_id uuid NOT NULL,
  organization_id uuid NOT NULL,
  user_id uuid,
  token_id uuid,
  referrer text,
  id_1 text,
  id_2 text,
  id_3 text,
  id_4 text,
  id_5 text,
  counters jsonb NOT NULL DEFAULT '{}'::jsonb,
  CONSTRAINT "PK_monitoring.usage_stats" PRIMARY KEY (id)
);

CREATE UNIQUE INDEX usage_stats_uq_usage_stats_composite_key
  ON monitoring.usage_stats
  USING btree
  (date, application_id, organization_id, user_id, token_id, referrer COLLATE pg_catalog."default", id_1 COLLATE pg_catalog."default", id_2 COLLATE pg_catalog."default", id_3 COLLATE pg_catalog."default", id_4 COLLATE pg_catalog."default", id_5 COLLATE pg_catalog."default");
INSERT INTO monitoring.usage_stats (
    date,
    application_id,
    organization_id,
    user_id,
    token_id,
    referrer,
    id_1,
    id_2,
    id_3,
    id_4,
    id_5,
    counters
)
VALUES (
    @date,
    @applicationId,
    @organizationId,
    @userId,
    @tokenId,
    @referrer,
    @id1,
    @id2,
    @id3,
    @id4,
    @id5,
    @counters::jsonb
)
ON CONFLICT (date,application_id,organization_id,user_id,token_id,referrer,id_1,id_2,id_3,id_4,id_5)
DO UPDATE SET
    counters = usage_stats.counters || jsonb_build_object('test', coalesce(cast(usage_stats.counters->>'test' as int), 0) + 2);

Beware: NULLs in the conflict target never conflict with each other (a unique index treats NULLs as distinct), so rows with e.g. a NULL user_id will keep inserting new records instead of updating the existing one.
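Semantically the statement above either inserts a fresh row or bumps a counter inside the existing row's jsonb. An in-memory Python sketch of that upsert (the structures and names are hypothetical):

```python
def upsert_counter(stats, key, counter, delta):
    """Insert a row for `key` or increment `counter` in its counters map,
    mirroring INSERT ... ON CONFLICT (...) DO UPDATE SET counters = ..."""
    row = stats.setdefault(key, {"counters": {}})
    counters = row["counters"]
    counters[counter] = counters.get(counter, 0) + delta
    return stats

stats = {}
upsert_counter(stats, ("2024-01-01", "app-1"), "test", 2)  # insert path
upsert_counter(stats, ("2024-01-01", "app-1"), "test", 2)  # conflict/update path
print(stats)
# {('2024-01-01', 'app-1'): {'counters': {'test': 4}}}
```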