Ozcode upcoming feature tweet: serializing objects for debugging

OzCode is currently asking developers what they want to see in future versions of its Visual Studio extension to simplify and enhance the debugging experience.

We use OzCode at work on complex .NET/WPF desktop applications, and it is an invaluable tool for many debugging tasks.

Today I read this tweet from @oz_code:

Would you like to be able to serialize objects from your debugging session and then use those objects in a Unit Test? (feature planning)

I didn’t have to think twice about it: yes! But it took less than a minute of thinking about what would be possible before I, being a developer myself, started wondering what the challenges of such a feature might be (even though I won’t have to implement it myself).

So I tweeted back with one major concern about whether, and how, that could work:

sounds great, and I’m curious already how to define boundaries of object graphs after you have that feature – tldr: yes please!

A few hours later @oz_code asked back:

Well, we were thinking of there being simply a numeric ‚depth‘ property, do you envision something more involved than that?

It seems we have different use cases in mind, so let me explain a bit further here, where I’m not restricted to 140 characters per post.

My use case: copying a runtime object model to a unit test

At work I’m dealing with data models built from about 100-150 entity classes, with object counts on the order of magnitude of 100,000, roughly estimated.

Writing complete unit tests is always a challenge, and I am nowhere near that goal yet. But where should new unit tests be added? Even if I really tried to be complete, some issues would always be missed and not covered; if that weren’t the case, nobody would have bugs in their software at all, right? The bugs that were missed show up at runtime. Sometimes they even show up at an early stage while debugging, or you want to figure out why a particular exception is thrown at a specific point.

To investigate those exceptions you try to reproduce them, and often the easiest way is to repeat the same steps via your software’s UI or API. Combined with breakpoints and stepping – where OzCode is a great tool, especially since version 2.0 – this usually lets you track the issue down to a few lines of code where you’re quite sure the model is consistent before, but inconsistent afterwards.

The challenge to create a minimal working example

The obvious recipe now would be to create a unit test covering those few lines with the failing (and some other) input. When you report bugs to other developer teams they usually ask for a „minimal working example“, and for good reason: it’s hard to build a complex model by hand and keep it in sync with the state you found to produce the error.

That’s what immediately came to my mind when I read the tweet mentioned above: what if I could copy the model from the debugging session, stop debugging, create a unit test, paste that model, and add the few lines of code followed by some assertions? It would be awesome.
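As a sketch of what such a pasted-model test could look like, assuming the exported model arrives as Json.NET-compatible JSON in a file (the `Customer` model, `Billing.CreateInvoice`, the file name, and the NUnit framework choice are all made up for illustration, not anything OzCode has announced):

```csharp
using System.IO;
using Newtonsoft.Json;   // Json.NET
using NUnit.Framework;

[TestFixture]
public class ExportedModelReproTest
{
    [Test]
    public void FailingLines_WithExportedModel()
    {
        // Load the model captured from the debugging session.
        var json = File.ReadAllText("exported-model.json");
        var customer = JsonConvert.DeserializeObject<Customer>(json);

        // The few lines of code that were failing at runtime...
        var invoice = Billing.CreateInvoice(customer);

        // ...followed by some assertions on the expected outcome.
        Assert.That(invoice, Is.Not.Null);
    }
}
```

The heavy lifting, recreating a complex object graph, would be done by deserialization instead of by hand-written construction code.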

What was I curious about?

Back to the Twitter conversation and my tweet:

sounds great, and I’m curious already how to define boundaries of object graphs after you have that feature – tldr: yes please!

OzCode’s idea obviously was to cut the model at a certain depth, but what if you have, say, a linked list? It wouldn’t be very complex, but for lists longer than the maximum depth, the ability to reproduce an issue would be gone.

You could increase that value to include all elements of the list, one might say, but is a single depth value a reasonable heuristic for cutting the model? I don’t think so.

Consider a model that is a linked list of objects with a big graph of „side data“ at each element. „Side data“ in this context is data that is not required for the current problem, i.e. for the method being executed or tested. To reproduce the error we want a „minimal working example“, so the export might have to include the whole linked list, but nothing else. A single depth value cannot differentiate between the two.
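To make the problem concrete, here is a minimal sketch of a naive depth-limited export over a linked list (the `Node` type and `Serialize` helper are hypothetical illustrations, not OzCode’s actual mechanism):

```csharp
using System;

// Hypothetical model: a singly linked list; imagine each node also
// carrying a large graph of "side data" we don't need for the test.
class Node
{
    public int Value;
    public Node Next;
}

static class DepthSerializer
{
    // Naive depth-limited serializer: stops once maxDepth is reached.
    public static string Serialize(Node node, int maxDepth)
    {
        if (node == null) return "null";
        if (maxDepth == 0) return "<cut>"; // the rest of the list is lost here
        return $"{{ Value = {node.Value}, Next = {Serialize(node.Next, maxDepth - 1)} }}";
    }
}

class Program
{
    static void Main()
    {
        // Build a list of five elements...
        var head = new Node { Value = 1 };
        var cur = head;
        for (int i = 2; i <= 5; i++)
        {
            cur.Next = new Node { Value = i };
            cur = cur.Next;
        }

        // ...and export it with maxDepth = 3: the tail is truncated,
        // so the exported model can no longer reproduce a bug that
        // depends on the later elements.
        Console.WriteLine(DepthSerializer.Serialize(head, 3));
    }
}
```

Raising `maxDepth` fixes this one list at the cost of dragging in everything else hanging off each node, which is exactly the trade-off described below.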

Instead, I can imagine several ways to define where to cut the graph for export, which OzCode apparently hasn’t considered yet:

  1. Sets of included/excluded types: In the example above, the side data very likely has different types than the data under test. So it could be possible to define a set of types and either „include only types of set A“ or „include everything, but cut before objects whose type is in set B“.
    Of course value-type properties should be included by default; I’m not sure about strings, but it should at least be easy to include them explicitly.
  2. Cutting manually in the object inspection view while debugging: OzCode’s Reveal feature comes to mind. It allows marking properties of a type to be revealed, so you can see them earlier and more easily. A similar function could be used to „exclude from here on down“.
  3. Ranges of lists and arrays: For arrays and lists it could be possible to define a range of elements to include.
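A rough sketch of how such boundary rules might compose with the „maximum depth“ idea (all names here are hypothetical; nothing below is actual OzCode API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical description of export boundaries, combining the
// type-set and list-range ideas above with OzCode's proposed depth limit.
class ExportBoundaries
{
    public int MaxDepth = int.MaxValue;                       // the depth idea from the tweet
    public HashSet<Type> ExcludedTypes = new HashSet<Type>(); // idea 1: cut before "side data" types
    public (int Start, int Count)? ListRange;                 // idea 3: slice of arrays/lists to keep

    // Decide whether a graph walker should follow a reference.
    public bool ShouldDescend(object obj, int depth)
    {
        if (obj == null || depth > MaxDepth) return false;
        var type = obj.GetType();
        if (type.IsValueType || type == typeof(string)) return true; // values/strings kept by default
        return !ExcludedTypes.Contains(type);                        // cut before excluded types
    }
}
```

A serializer would consult `ShouldDescend` at every reference it follows; the manual cuts from idea 2 could be modeled as an additional set of excluded property paths.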

I haven’t validated this further, but I think it could be incredibly powerful to define the cut boundaries with these options, allowing them to be combined with each other and with the „maximum depth“ approach OzCode has already thought about.

What would be possible in addition to that?

Incidentally, this would also provide a great way to create additional regression test cases: use your software where you know it is working correctly, break before a certain function, copy the model, and use it as test input for regression tests.


I know these are a lot of ideas, and of course some of them might be out of scope or too hard to implement. On the other hand, some of them seem to fit the functionality OzCode already provides, and it might be possible to reuse existing solutions to support them.

And even if nobody cares about my thoughts here, at least I have started blogging again after months of inactivity.

5 thoughts on „Ozcode upcoming feature tweet: serializing objects for debugging“

  1. I’m the CTO and co-founder of OzCode. First and foremost, thank you so much for taking the time to write down your *extremely* insightful and interesting comments! It is sincerely a true pleasure to have you brainstorming with us!

    I think you hit the nail on the head with your point regarding the case of a linked list – that’s indeed one case where the depth-based approach would totally break down.

    I think there are two most common scenarios. One is where you just want as full a representation of the original object as possible in your Unit Test, and want a simple way of getting it (for which we were thinking you could use OzCode’s JSON representation, which will be in exactly the format that Json.NET, the popular JSON library, produces, making it easy to deserialize in your unit test, and very nicely human-readable). In this case, you would probably want the JSON output in a separate file, which your unit test would then load and deserialize, but of course that’s entirely up to you!

    The other scenario is where you would want much more fine-grained control over the object creation because, as you’ve said, you want the *minimal* working example. In this case, you would probably want OzCode to produce either C# code that creates the object, “new Customer { Name = “Tom”, … }”, or perhaps even (as others have requested) C# code for the set-up phase of a Mock, tailor-made for your favorite mocking framework, to copy-paste directly into the test. In this case, all the ideas you mentioned make perfect sense to us, and we will certainly think through them and see where we can support this in a future version.

    Again, thank you *so* much for sharing your thoughts, it is very very highly appreciated, and I’m touched by the fact that you took the time to write it!
