Mapping a Pivot Query in EntityFramework6

In the last few days I faced a challenging problem at work (in fact I like challenging problems, and they're not that rare, which is why I like my job).

This time the problem was this:

  1. Let T be a table with columns E and X.
  2. Let column E be of an enum type with a small, statically defined set of values E1, E2, E3.
  3. Let column X contain a space separated list of tokens (values look like "ABC", "ABC DEF", "XUT OSH OJR", with an arbitrary number of tokens).
  4. We are looking for the following information:
    For a given token x occurring in the values of X, get the tuple R = (#(E1), #(E2), #(E3)), where #(Ei) denotes the number of rows of T in which x is contained in X and column E has the value Ei (see the sketch below the list).
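This is essentially a pivot: a single result row with one count column per enum value. Before going into the mapping details, here is a minimal LINQ sketch of that shape, assuming a hypothetical EF6 context with a DbSet<TRow> named T; all type and member names are made up for illustration and not taken from the actual code:

using System.Data.Entity;
using System.Linq;

public enum E { E1, E2, E3 }

public class TRow
{
    public int Id { get; set; }
    public E E { get; set; }
    public string X { get; set; } // space separated tokens, e.g. "ABC DEF"
}

public class DemoContext : DbContext
{
    public DbSet<TRow> T { get; set; }
}

public static class PivotDemo
{
    public static void CountPerEnumValue(DemoContext context, string token)
    {
        var counts = context.T
            // pad with spaces so the token only matches whole list entries
            .Where(t => (" " + t.X + " ").Contains(" " + token + " "))
            // collapse all matching rows into a single group ...
            .GroupBy(t => 1)
            // ... and pivot: one conditional count per enum value
            .Select(g => new
            {
                CountE1 = g.Count(t => t.E == E.E1),
                CountE2 = g.Count(t => t.E == E.E2),
                CountE3 = g.Count(t => t.E == E.E3),
            })
            .SingleOrDefault(); // null when no row contains the token
    }
}

Grouping by a constant collapses all matching rows into one group, so the three conditional counts come back as a single row, which is exactly the pivoted shape of R.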


Strange behaviour of String Interpolation Refactorings in VisualStudio 2015/Roslyn/Resharper

We switched to VisualStudio 2015 at work a few days ago, still using Resharper inside it as before. This version of VisualStudio brings some great new features, a few bugs, and some strange behaviours I don't understand.

In a few articles I'm going to talk about some strange refactoring suggestions I discovered, starting with string interpolation here.
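For context, the refactorings in question convert classic string.Format calls into the new C# 6 interpolated strings. A minimal, purely illustrative before/after:

static class InterpolationDemo
{
    // Before: classic composite formatting
    static string Before(string userName, int count) =>
        string.Format("Hello {0}, you have {1} new messages", userName, count);

    // After applying the suggested refactoring: a C# 6 interpolated string
    static string After(string userName, int count) =>
        $"Hello {userName}, you have {count} new messages";
}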

Just to ask beforehand: is there any way to identify where a refactoring comes from, i.e. whether a given suggestion and/or its implementation stems from Roslyn or from Resharper?

IEnumerable, ICollection, IReadOnlyCollection – an API analysis on .NET and XUnit

Today, for the first time since I started using XUnit, I wrote a theory whose individual test parameters could not easily be defined as compile-time constants.

In the next chapter I'm going to explain the motivation and the use case that arose here, before the chapter after that tries to give a summary of the .NET API behind that issue and what's wrong with it.
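For readers who haven't run into this: InlineData only accepts compile-time constants, so xUnit's usual way out is MemberData pointing at a static member that yields one object array per test case. A minimal sketch with made-up names:

using System.Collections.Generic;
using Xunit;

public class CollectionTheories
{
    // [InlineData] cannot carry these lists, so the cases come from a
    // static member returning IEnumerable<object[]> instead.
    public static IEnumerable<object[]> Cases()
    {
        yield return new object[] { new List<int> { 1, 2, 3 }, 3 };
        yield return new object[] { new List<int>(), 0 };
    }

    [Theory]
    [MemberData(nameof(Cases))]
    public void Count_matches_expectation(List<int> items, int expectedCount)
    {
        Assert.Equal(expectedCount, items.Count);
    }
}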

Ozcode upcoming feature tweet: serializing objects for debugging

Currently oz_code is asking what developers want to see in future versions of their Visual Studio extension to simplify and enhance the debugging experience.

We use oz_code at work for complex .NET/WPF desktop applications, and it is an invaluable tool for various debugging use cases.

Today I read this tweet from @oz_code:

Would you like to be able to serialize objects from your debugging session and then use those objects in a Unit Test? (feature planning)

And I didn't have to think twice about it: yes! But it took me less than a minute of thinking about what would be possible before, being a developer myself, I started wondering what the challenges of such a feature might be (even though I don't have to implement it myself).
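To make the idea concrete, here is a purely hypothetical sketch of how such a feature might be consumed from the test side, assuming the debugger exports the captured object graph as JSON. Json.NET is used here only for illustration; nothing in this sketch reflects how Ozcode actually plans to implement the feature:

using System.IO;
using Newtonsoft.Json;
using Xunit;

public class Order // stand-in for whatever type was inspected while debugging
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class CapturedObjectTests
{
    [Fact]
    public void Bug_reproduces_with_object_captured_while_debugging()
    {
        // captured-order.json would have been exported from the debug session
        var json = File.ReadAllText("captured-order.json");
        var order = JsonConvert.DeserializeObject<Order>(json);

        // the suspicious condition from the debug session, now a regression test
        Assert.True(order.Total >= 0);
    }
}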


A demo post for Mareike

Hello Mareike,

here you can see what a post containing an audio clip can look like.

For testing purposes I used the song "Little Sister" by "Waterfalls", which is published on Jamendo under the CC-BY-NC-ND license.

The first player comes directly from the podcast; that is a podcast plugin which makes sure the blog can also be subscribed to as a podcast (your listeners then get each new episode delivered to their phones every day).

The second player, which follows now, is an alternative: the track is then only visible in the article, but cannot be played as a podcast:

For legal reasons, the advent calendar is only available to logged-in users

You can combine that with scheduled publishing, so that you publish the post for door 5 on December 5th, and so on. Guests can comment, link to it, etc.

As a podcast (the first variant), the whole thing should also be easy to listen to in the right order.

Test database instances

OSM is a big database. PostgreSQL/PostGIS is an excellent database management system, but for local tests on my notebook I have a problem: I have only one slow hard disk, which is shared by the PostgreSQL tablespace, Tomcat, the Linux swap partition, and Firefox.

Even with 4 GB of RAM that becomes a problem once the database tables grow too big to fit into memory. Especially with the Mapnik database schema, some actions require full table scans on big tables (that's one reason why I would prefer a different database schema).

A worldwide OSM database is too big for my hard disk (or at least I don't want to dedicate that much disk space to testing right now), so I first tried a Germany extract database (built from Geofabrik's PBF extract of Germany). But even that is too big for fast processing, as at least one query of the exploration page needs a full sequential scan through the ways table. As a result, calculating any exploration page takes several minutes, which isn't acceptable for debugging (just as it wouldn't be acceptable for a live environment, of course).

In the end I had to restrict myself to an even smaller part of the world.

To achieve that, I first thought about cutting out a bounding box around Paderborn with osm2pgsql directly:

osm2pgsql -s -C 768 -c -p paderborn -d lalm -U lalm -W -H 127.0.0.1 -P 5432 \
          -b 8.6,51.65,8.9,51.8 --hstore --style ./default.style germany.osm.pbf

The problem here is that osm2pgsql ignores the bounding box (parameter -b) when writing the slim tables. These are written containing every node of the import file, which takes quite a long time and uses lots of disk space.

The second attempt worked better: I first used osmosis to produce the bounding box extract and then imported the whole resulting file using osm2pgsql. The corresponding osmosis command is

osmosis --read-pbf file="germany.osm.pbf" \
        --bounding-box top=51.8 left=8.6 bottom=51.65 right=8.9 \
        --write-pbf file="paderborn.osm.pbf" compress=none

I decided not to compress the paderborn.osm.pbf file, as that speeds up reading the file with osm2pgsql afterwards. The osm2pgsql import is then the same as above, except that the -b parameter is dropped, as the bbox now already corresponds to the whole file.
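Spelled out, the resulting import command would then look like this (the same parameters as before, just without -b and reading the new extract):

osm2pgsql -s -C 768 -c -p paderborn -d lalm -U lalm -W -H 127.0.0.1 -P 5432 \
          --hstore --style ./default.style paderborn.osm.pbf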