Continuous Delivery Is a Journey – Part 2

After describing the context a little in part one, it is time to look at the individual steps the source code must pass through in order to be delivered to the customers. (I'm sorry, but this is quite a long part 🙄)

The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity, which I don't intend to discuss here).

This action triggers the first checks and quality gates, like licence validation and unit tests. If all checks are green, the new version of the software is saved to the repository manager and tagged as "latest".

A successful push leads to a new version of my service/package/Docker image

At this point continuous integration is done, but the features are far from being usable by any customer. I have a first piece of feedback telling me that I didn't break any tests or other basic constraints, but that's all: nobody can use the features because they are not deployed anywhere yet.

Well, let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development).

Continuous delivery to the first environment, including the execution of the first acceptance tests

At this point my changes are tested to see whether they work together with the features already integrated by my colleagues and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn't break the "platform"? What if I don't want to disturb everybody else working on the same product because I made some mistakes – but I still want to be human and therefore allowed to make mistakes 😉? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

This obviously must be a different set of tests. They should verify that the whole system (composed of a few microservices, each with its own data persistence, and one or more UI apps) works as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. With k8s in place (and the infrastructure defined as code) it is almost a no-brainer to set up a new environment, wire up the individual systems in isolation, and execute the system tests against it. This requires not only the k8s part as code but also the resources deployed and running on it. With ksonnet every service, deployment, ingress configuration (which manages external access to the services in a cluster), or config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these capabilities; ksonnet is not the only one. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

Good developer experience also means simplified continuous deployment

I will not include any ksonnet examples here; the project has great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on – and, the feature that helped us in our solution, can be tagged.

What happens in a continuous delivery? A change in the VCS triggers the pipeline, the fitting version of the source code is loaded (either as source code like ksonnet files, or as a package or Docker image), the configured quality gates are verified (the runtime environment is wired up, the specs for the referenced version are executed), and in case of success the artifact is tagged as "thumbs up" and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Deploy manually the latest resources from integration to the review stage
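
To make this flow a bit more tangible, here is a minimal sketch of such a promotion step in TypeScript. It is not our real pipeline code – the helper names and the service name are invented for illustration – but it shows the order of the steps: wire up an environment, run the specs as a quality gate, and only then tag and promote.

    // Hypothetical helpers standing in for "wire up the runtime environment",
    // "execute the specs" and "tag & promote" – not a real pipeline API.
    type Artifact = { name: string; version: string };

    async function deployToTestEnvironment(artifact: Artifact): Promise<void> {
      console.log(`deploying ${artifact.name}:${artifact.version} to an isolated test environment`);
    }

    async function runAcceptanceSpecs(artifact: Artifact): Promise<boolean> {
      console.log(`running the specs referencing version ${artifact.version}`);
      return true; // pretend all quality gates are green
    }

    async function tagAndPromote(artifact: Artifact, nextStage: string): Promise<void> {
      console.log(`tagging ${artifact.name}:${artifact.version} as "thumbs up" and promoting it to ${nextStage}`);
    }

    // The step triggered by a change in the deployment repo.
    export async function promote(artifact: Artifact, nextStage: string): Promise<void> {
      await deployToTestEnvironment(artifact);
      const green = await runAcceptanceSpecs(artifact);
      if (!green) {
        throw new Error(`quality gates failed for ${artifact.name}:${artifact.version}`);
      }
      await tagAndPromote(artifact, nextStage);
    }

    // Example: promote the "latest" version of a (made-up) service from integration to review.
    promote({ name: "checkout-service", version: "latest" }, "review").catch(console.error);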

If you have all this working, you have finished the part requiring the biggest effort. Now it is time to automate and generalize the individual steps. After continuous integration the only changes occur in the ksonnet repo (all other source code changes happen before that), which I call the deployment repo here.

Roll out, test, and if necessary roll back the system ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential piece: how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for 😉), and about some open questions and decisions we encountered on our journey.

Every graphic was created with PlantUML – thank you very much!

to be continued …

Continuous Delivery Is a Journey – Part 1

Last year my colleagues and I had the pleasure of spending two days with @hamvocke and @diegopeleteiro from @thoughtworks reviewing the platform we created. One essential part of our discussions was about CI/CD, described like this: "think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable."

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw the great opportunities we would gain if we were able to work this way.

Let me describe the context we were working in:

  • Four business teams, 100% self-organized, each owning 1…n Self-contained Systems (SCS), creating microservices running as Docker containers, orchestrated with Kubernetes, hosted on AWS.
  • Boundaries (as in Domain Driven Design) defined based on the business we were in.
  • Each team having full ownership and full accountability for their part of business (represented by the SCS).
  • Basic heuristics regarding source code organisation: "share nothing" about business logic, "share everything" about utility functions (in OSS manner), about the experiences you had, the lessons you learned, the mistakes you made.
  • Ensuring code quality and software quality is 100% the team's responsibility.
  • You build it, you run it.
  • One Platform-as-a-Service team to enable these business teams to deliver features fast.
  • GitLab as VCS, Jenkins as build server, Nexus as package repository.
  • Trunk-based development, no cherry picking, “roll fast forward” over roll back.
Teams
4 Business Teams + 1 Platform-as-a-Service Team = One Product

The architecture we chose was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide for themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

  • Event-driven architecture: no synchronous communication, only asynchronous communication via the domain event bus
  • Non-blocking systems: every SCS must remain (reduced) functional even if all the other systems are down

We had only a couple of exceptions to these rules. As an example: authentication doesn't really make sense in an asynchronous manner.

Working in self-organized, independent teams is a really cool thing. But

with great power there must also come great responsibility

Uncle Ben to his nephew

Even though we set some guardrails regarding the overall architecture, the teams still had ownership of their internal architecture decisions. Since at the beginning we didn't have continuous delivery in place, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do it for us 🤔)

One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app and this way have the API interface definition bundled into the client application.
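
To illustrate what this looks like in practice, here is a minimal sketch assuming Apollo Client; the endpoint and the customer field are invented for this example. The query is validated against the schema shipped with the app, so the client silently depends on the server exposing exactly these fields.

    import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

    const client = new ApolloClient({
      uri: "https://example.com/graphql", // hypothetical endpoint
      cache: new InMemoryCache(),
    });

    // This document only works as long as the API still exposes a `customer`
    // field with exactly these sub-fields – the schema knowledge lives in the app.
    const GET_CUSTOMER = gql`
      query GetCustomer($id: ID!) {
        customer(id: $id) {
          id
          name
        }
      }
    `;

    client
      .query({ query: GET_CUSTOMER, variables: { id: "42" } })
      .then((result) => console.log(result.data));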

Isn't this cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling, and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: "If two services cannot be deployed independently they aren't two services!"

This was the first lesson we learned on this journey: if you don't have a CD pipeline, you will most probably hide the flaws of your design.

One can surely ask "what is the problem with manual deployment?" – nothing, if you only have a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps needed to minimize downtime. But otherwise? This method doesn't scale, this method is not very professional – and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning – with the goal that every commit lands on production within a few seconds – forces everyone to think about the consequences of their commits, to write backwards-compatible code, to become a more thoughtful developer.

to be continued …

Base your decisions on heuristics and not on gut feeling

As developers we very often tackle problems which can be solved in various ways. It is OK not to know how to solve a problem. The real question is: how do we decide which way to go 😯

In these situations I often have a feeling rather than a concrete logical reason for my decisions. These gut feelings are in most cases correct – but that doesn't help me if I want to discuss them with others. It is not enough to KNOW something. If you are not a nerd from the 80s (working alone in a den) it is crucial to be able to formulate, explain, and share the thoughts leading to those decisions.

I finally found a solution for this problem when I saw Mathias Verraes' session about Design Heuristics at KanDDDinsky.

The biggest takeaway seems to be a no-brainer, but it makes a huge difference: formulate and visualize your heuristics so that you can talk about concrete ideas instead of having to memorize everything that was said – or what you think was said.

Using this methodology …

  • … unfounded opinions like "I think this is good and this is bad" won't be discussed. The question is why something is good or bad.
  • … loops back to subjects that were already discussed are avoided
  • … the participants can see all criteria at once
  • … the participants can weigh the heuristics and thus find what is probably the best solution

What is necessary for this method? Actually nothing but a whiteboard and/or some stickies. And maybe to take some time beforehand to list your design heuristics. These are mine (for now):

  • Is this a solution for my problem?
  • Do I have to build it or can I buy it?
  • Can it be rolled out without breaking either my features or anything else outside of my control?
  • Does it break any architecture rules or clean code rules? Do I have a valid reason to break these rules?
  • Can it lead to security leaks?
  • Is it over-engineered?
  • Is it much too simple, does it feel like a shortcut?
  • If it is a shortcut, can it be corrected in the near future without having to throw everything away? In other words: does my shortcut drive the code in the right direction, just in a more shallow way?
  • Does this solution introduce a new stack, i.e. a new, unknown complexity?
  • Is it fast enough (for now and the near future)?
  • … to be continued 🙂

The video for the talk can be found here. It was a workshop disguised as a talk (thanks again Mathias!!); we could have continued for another hour if it weren't for the cold beer waiting 🙂

My KanDDDinsky distilled

KanDDDinsky

The second edition of "KanDDDinsky – The art of business software" took place on 18-19 October 2018. For me it was the best conference I have visited in a long time: the talks I attended created a coherent picture all together, and the speakers sometimes made me feel like I was visiting an Open Space, an unconference. It felt like a great community event with the right amount of people, with the right amount of knowledge, and with enough time to have great discussions during the two days.

These are my takeaways and notes:

Michael Feathers “The Design of Names and Spaces” (Keynote)

  1. Do not be dogmatic; sometimes allow the ubiquitous language to drive you to the right data structure – but sometimes it is better to make the decision the other way around.
  2. Build robust systems, follow Postel’s Law

Be liberal in what you accept, and conservative in what you send.

If you ask me, this principle shouldn't only be applied to software development…

Kenny Baas-Schwegler – Crunching ‘real-life stories’ with DDD Event Storming and combining it with BDD

I learned so much from Kenny that I had to write it up in a separate blog post.

Update: the video of this talk can be seen here

Kevlin Henney – What Do You Mean?

This talk was extremely entertaining and informative; you should watch it once it is published. Kevlin addressed so many thoughts around software development that it is impossible to choose a single message. And yes: the sentence "It's only semantics" still makes me angry!

Codified Knowledge
It is not semantics, it is meaning that we turn into code

Here is the video to watch.

Herendi Zsofia – Encouraging DDD Curiosity as a Product Owner

It was interesting to see a product owner talking about her efforts to get developers interested in the domain. It was somewhat curious because we were at a DDD conference – I'm sure everyone present was already interested in building the right features fitting the domain and the problem – but of course we are only a minority among the coding people. She belongs to the clear minority of product owners who are openly interested in DDD. Thank you!

Mathias Verraes – Design Heuristics

This session was so informative that I had to write a separate post about all the things I learned.

J. B. Rainsberger – Some Underrated Elements of Success for the Modern Programmer

J.B. is my oldest "Twitter pal" and over the past 5+ years we have discussed everything from tests to wine to how to find whipped cream in a Romanian shopping center. But we had never met in person 😥 I am really happy that Marco and Janek fixed this for me!

The talk was just as I expected: clear, accurate, very informative. Here is a small subset of the tips shared by J.B.

Save energy not time!

There are talks which cannot be distilled. J. B.’s talk was exactly one of those. I really encourage everybody to invest the 60 minutes and watch it here.

Statistics #womenInTech

I had the feeling there were a lot of women at the conference, even if they represented "only" 10% (20 out of 200) of the participants. But still: 5-6 years ago I was mostly alone, and that is not the case anymore. This is great; I really think that something has changed in the last few years!

Event Storming with Specifications by Example

Event Storming is a technique defined and refined by Alberto Brandolini (@ziobrando). I fully agree with the statement about this method: Event Storming is for now "the smartest approach to collaboration beyond silo boundaries".

I don't want to explain what Event Storming is; the concept has been present in the IT world for a few years already and there are a lot of articles and videos explaining the basics. What I want to emphasize is WHY we need to learn and apply this technique:

The knowledge of the product experts may differ from the assumptions of the developers
KanDDDinsky 2018 – Kenny Baas-Schwegler

On 18-19.10.2018 I had the opportunity not only to hear a great talk about Event Storming but also to be part of a two-hour hands-on session, all of this powered by KanDDDinsky (for me the best conference I visited this year) and by @kenny_baas (and @usecasedriven and @brunoboucard). In the last few years I have participated in a few Event Storming sessions, mostly at community events, twice at cleverbridge, but this time it was different. Maybe ES is like unit testing: you have to practice and reflect on what went well and what must be improved. Anyway, this time I learned and observed a few rules and principles that were new to me, and their effects on the outcome. This is what I want to share here.

  1. You need a facilitator.
    Both ES sessions I was part of at cleverbridge ended in frustration. All participants were willing to try it out, but we had nobody to keep the chaos under control. Because, as Kenny said, "There will be chaos, this is guaranteed." And this is OK: we – devs, product owners, sales people, etc. – have to learn fast to understand each other without learning the job of the "other party" or writing a glossary (I tried that already and it didn't help 😐). We also need somebody who is able to feel and steer the dynamics in the room.


    The tweets were written during a discussion about who could be a good facilitator. You can read the whole thread on Twitter if you like. Another good article summarizing the first impressions of @mathiasverraes as facilitator is this one.

  2. Explain AND visualize the rules beforehand.
    I will skip the basics for now, like the necessity of a very long free wall and the fact that the events should visualize the business process evolving over time.
    These are the additional rules I learned in the hands-on session:

      1. No dev-talk! Developers are per se a species able to transform EVERYTHING into patterns and techniques and tables and columns, and this ability is not helpful if one wants to know whether we can solve a problem together. By using dev-speak the discussion will be driven towards technical "solvability" based on the current technical constraints, like the architecture. With ES we want to create or deepen our ubiquitous language, and this surely does not include the words "message bus" 😉
      2. Every discussion should happen at the board. There will be a lot of discussions, and we tend to talk a lot about opinions and feelings. This won't happen if we keep discussing the business processes and events that are visualized in front of us – on the board.
      3. No discussions about persons not in the room. Discussing what we think other people might think is not productive and cannot lead to real results. Do not waste time on it; time is too short anyway.
      4. Open questions occurring during the storming should not be discussed (see the point above) but marked prominently with a red sticky. Do not waste time.
      5. Do not discuss everything, look for the money! The most important goal is to generate benefit, not to create the most beautiful design!

Tips for the Storming:

  • “one person, one sharpie, one set of stickies”: everybody has important things to say, nobody should stay away from the board and the discussions.
  • start by describing the business process, business rules, eventually consistent business decisions a.k.a. policies, and other constraints you – or the product owner to whom the business "belongs" – would like to model, and write the most important information somewhere visible to everybody.
  • explain how ES works: every business-relevant event should be placed on a timeline and should be formulated in the past tense. Business-relevant is everything somebody (Kibana is not a person, sorry 😉) would like to know about.
  • explain the rules and the legend (you need a color legend to be able to read the results later).
  • give the participants time (we had 15 minutes) to write every business event they think is important to know about on orange stickies. Also write the business rules (the wide dark red ones) and the product decisions (the wide pink ones) on stickies and put them where they apply: the rules before the event, the policies after an event has happened.
  • start putting the stickies on the wall, throw away the duplicates, discuss and maybe reformulate the rest. After you are done, try to tell the story based on what you can read on the wall. After this, read the stickies from the end to the start. With these methods you should be able to discover whether you have gaps or used wrong assumptions while modelling the process you wanted to describe.
  • mark known processes (like "manual process") with the same stickies as the policies and do not waste time discussing them further.
  • start discussing the open questions. There are almost always different ways to answer these questions, and if you cannot decide in a few seconds, postpone it. But as a default: decide to create the event and measure how often it happens, so that later on you can make the right decision!
    Event Storming – measure now, decide later


    Another good article on this topic is this one from @thinkb4coding.

At this point we could have continued the process to find aggregates and bounded contexts, but we didn't. Instead we switched methodology to Specifications by Example – in my opinion a really good idea!

Specifications
Event Storming enhanced with Specifications by Example

We prioritized the rules and policies, and for the most important ones we defined examples – just like we do when we discuss a feature and try to find the algorithm.

Example: in our ticket reservation business we had a rule saying “no overbooking, one ticket per seat”. In order to find the algorithm we defined different examples:

  • 4 tickets should be reserved and there are 5 tickets left
  • 4 tickets should be reserved and there are 3 tickets left
  • 4 tickets should be reserved and all tickets are already reserved.
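
Written down as executable specifications, the three examples above could look roughly like this (a sketch in TypeScript; the reserveTickets function and its reject-the-whole-request behavior are my assumptions, not something we defined in the workshop):

    import assert from "node:assert";

    type ReservationResult = { reserved: number; rejected: boolean };

    // Rule: "no overbooking, one ticket per seat" – here interpreted as rejecting
    // the whole request when not enough tickets are left (an assumption).
    function reserveTickets(requested: number, ticketsLeft: number): ReservationResult {
      if (requested > ticketsLeft) {
        return { reserved: 0, rejected: true };
      }
      return { reserved: requested, rejected: false };
    }

    // Example 1: 4 tickets should be reserved and there are 5 tickets left.
    assert.deepStrictEqual(reserveTickets(4, 5), { reserved: 4, rejected: false });

    // Example 2: 4 tickets should be reserved and there are 3 tickets left.
    assert.deepStrictEqual(reserveTickets(4, 3), { reserved: 0, rejected: true });

    // Example 3: 4 tickets should be reserved and all tickets are already reserved.
    assert.deepStrictEqual(reserveTickets(4, 0), { reserved: 0, rejected: true });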

With this last step we can verify whether our ideas and assumptions will work out, and we can gain even more insights into the business rules and business policies we defined – and all this not as a developer writing if-else blocks, but together with the other stakeholders. At the same time, the non-techie people will understand in the future what impact these rules and decisions have on the product we build together. Having the specifications already defined is also a great side benefit, as these are the acceptance tests which will be built by the developers and read and used by the product owner.

You can read more about the example and the results on Kenny Baas-Schwegler's blog.

I hope I covered everything and have succeeded in reproducing the most important learnings of the two days (I tend to overlook things, thinking "it is obvious"). If not: feel free to ask, I will be happy to answer 🙂

Happy Storming!

Update: we had our first event storming and it was good!

Unfortunately we didn't get to define the examples (not enough time). Most of the rules described above were accepted really well (explain the rules, create a legend for the stickies, flag everything out of scope as an open question). Where I as facilitator need more training is in keeping the discussion ON the board and not beside it. I also have a few new takeaways:

  • the PO describes his feature and gives answers, but he doesn't write stickies. The main goal is to share his vision. This means he should test whether we understood the same vision. As a bonus, he should complete his own understanding of the feature through the questions which appear during the storming.
  • one color means one action/meaning. We had policies and processes on the same red stickies and this was misleading.
  • if you have a really complex domain (like e-commerce for SaaS products in our case) or really complex features, start with one happy-path example. Define this example and create the event "stream" with it. At the end you should still add the other, not-so-happy-path examples.

Colleague-Bashing – Surprise, It Doesn't Help!

At every conference my colleagues and I attend, the topic of team culture pops up sooner or later as the cause of many/all problems. When we describe how we work, we inevitably end up at the statement "a self-organized, cross-functional organization without a boss who has THE final say is naive and not realistic". "You surely have a boss somewhere, you just don't know it!" was one of the wildest answers we heard recently, simply because the other person was unable to process this picture: 5 self-organized teams, without bosses, without a CTO, without project managers, without any rules and requirements dumped on us from outside, without deadlines we would accept without objection. Instead, with self-imposed deadlines, with budgets, with freedom and responsibility in equal measure.

I am not talking about the legal side and what's on paper here: of course we have a CTO, a Head of Development, and a CFO in the company; they just don't decide when, what, and how we do things. They define the frame within which the company management invests in the product/venture, but the rest we do ourselves: POs and Scrum Masters and developers, together.

We have been working in this constellation for more than a year, and we can add about 6 months of lead time on top, until we were able to start this project on the basis of Conway's Law.

“Organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.” [Wikipedia]

Conversely (and freely translated) this means: "as your organization is, so will your product, your code, be structured". So we worked on our organization. The goal was to build a responsible team that is free to dream, in order to build a new, great product without imposed shackles.

We now have this team, we are now living this dream – which of course also has its shadows; after all, life is no pony farm :). The difference is: they are our problems, and we don't dodge them, we solve them together.

Before you say "that is a stroke of luck, it normally doesn't happen", I would disagree. It didn't just happen to us either; we worked on it (for about 6 months) and keep doing so continuously. The trick, the key to this organization, is an open feedback culture.

What does that mean, and how did we achieve it?

  • We learned to give and receive feedback – yes, that is not so easy. These are the rules:
    • All statements are subjective: "Yesterday when I did the review I saw this and that. I find that not good enough/dangerous for the following reasons. I could imagine that doing it this way or that way could bring us to the goal faster." You notice: never say YOU, everything in the first person, without preconceived opinions or assumptions.
    • All statements come with concrete examples. Statements like "I believe, I have the feeling, etc." are opinions and not facts. You have to find an example, otherwise the feedback is not "admissible".
    • Feedback is always formulated constructively. It doesn't help to say what is bad; it is much more important to say what should be worked on, e.g. "I know from my own experience that pair programming is very helpful in such cases".
    • The person receiving the feedback must listen to it without justifying themselves. They have to decide for themselves what to do with the feedback. Everyone who wants to improve will try to take this feedback to heart and work on themselves. You don't have to prescribe that!
  • One-on-ones: these are feedback rounds between two people in a team, at the beginning together with the Scrum Master until people got used to the phrasing (at first we laughed the whole idea off), and later just the pairs themselves. Each round goes in only one direction (only one person receives feedback), and e.g. a week later it goes in the other direction. The result is that by now we don't need scheduled meetings for this anymore; we do it automatically, every time there is something to give feedback on.
  • Team feedback: this is the last stage and follows the same rules. It is held not only between teams but also between groups/guilds, such as POs or architecture owners.

That's it. For more than a year I haven't heard sentences like "it was the idiots from the other team who messed everything up" or "they won't get it done anyway" or "why should I care, they checked in the bug". And this working atmosphere gives you wings! (sorry for the copyright violation 😉)

10 Years of Open Space – My Retrospective

Workshop day:

For a few years now it has been possible to extend the Open Space by one day of workshops – in case two days of nerd talk are not enough for you 😉

This time I chose Tensorflow: Programming Neural Networks with Sören Stelzer – and it was great. Even though it is a very difficult topic (the word voodoo came up more than once), I now know enough about machine learning and neural networks to get off to a good start with the subject. Let me put it this way: I now know what I know and, above all, what I don't know and how we need to continue. You cannot expect more from a workshop. In addition, I think Sören is a great asset to our community, which has to keep evolving just like the IT world out there. Thank you very much for your commitment!

Actually, a big thank you to all trainers who get involved in community events!!

Findings from the next 48 hours – clustered:

Agile data-driven development – this was my own session (meaning I proposed the topic and was the topic owner, but that was about it as far as duties go).

I wanted to hear tips and ideas on how to organize your work along Scrum lines when you are working on topics like reporting, where the features are based on large amounts of data. It is one thing to write a test setup for 2 possible situations, and quite another to describe the variety of situations in reporting.

Take-aways:

  • we will have to live with the fact that our features, tests, and expectations are eventually consistent 😀 What matters is that we make assumptions which we treat as "the truth" for the beginning.
  • commission user labs.
  • building in measurements long before their evaluation is OK and does not break the concept of "every feature must have business value" – even if the real business value can only be evaluated in 2 years.
  • Aha moment: in the world of business teams there is no separate business department. I am in the reporting team, ergo I am the business department. (I like that; ugly word 😎)

Pitfalls with React

  • our internationalization concept is right (split the texts by modules/areas/etc., have one common area, load everything into the state via the API)
  • package recommendation: react-intl (see the sketch after this list)
  • consider this topic as early as possible; later on it can really hurt.
  • DevTool recommendation: https://github.com/crysislinux/chrome-react-perf to inspect the performance of the individual React components.
  • (es)lint recommendation to avoid circular references: "import/no-internal-modules" (thanks @kjiellski)
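
A minimal sketch of this setup with react-intl (the component name and message ids are invented; in the real app the messages would come from the API, split by module/area plus a common part):

    import React from "react";
    import { IntlProvider, FormattedMessage } from "react-intl";

    // Loaded via the API into the state in the real app; hard-coded here for the sketch.
    const messages: Record<string, string> = {
      "common.greeting": "Hello {name}!",
      "checkout.submit": "Buy now",
    };

    export const App: React.FC = () => (
      <IntlProvider locale="en" messages={messages}>
        <p>
          <FormattedMessage id="common.greeting" values={{ name: "world" }} />
        </p>
        <button>
          <FormattedMessage id="checkout.submit" />
        </button>
      </IntlProvider>
    );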

When can Scrum work

  • when it is possible to react to feedback, i.e. the developers are not resources but creative people.
  • the team in which I have the honor of helping shape our product, and @cleverbridge, are leading when it comes to agile working.

People

  • you can take part in drinking games without drinking
  • dreaming at night that your partner has disappointed you and then being mad at him for the whole day is a women's thing (confirmed by @AHirschmueller and @timur_zanagar) 😀

Addendum: almost forgot that

  • thanks to @agross we had a super valuable session about dotfiles
  • DDD is currently being ruined by certification, serverless by hype
  • with the session by @a_mirmohammadi about the "Anonyme Abnehmer" (anonymous weight-losers), the @devopenspace has clearly arrived in the category "there is nothing that doesn't work"

The Open Space must remain an UNconference

otherwise it is no longer an Open Space…

 

Last weekend the last mandatory event of the year took place: the 8th Developer Open Space in Leipzig. Once again all records were broken – more workshops (20), more participants (around 240), and – to my particular delight – more women (I don't know the exact numbers, but in the double digits) than ever before. Thanks again to @TorstenWeber for the great work.

For "connoisseurs" – old Open Space hands – it was instructive and exciting, as always. How could it be otherwise when more than 200 nerds meet and exchange their experiences. This time even the usual 14 hours of session time + shared breakfast + shared evening event were not enough.

It was great, as always, but as I said: for connoisseurs. I hadn't even reached the airport when a discussion broke out about the increasingly numerous "slide sessions" – presentations.

 


The truth is, I didn't notice this before either, but that was due to my experience in prioritizing correctly. After so many years of community "membership" I know exactly which sessions are worth visiting in the limited time you have at an Open Space. That's why I systematically ignore sessions like "I'll show you my presentation about the coolest, hippest, whatever framework and how you can all build the coolest, hippest, whatever websites with it".

And that is the problem: only the experienced people know this, the ones who have always been around, even in the times when this was the exception. The label "unconference" is not there by accident: an Open Space is not a conference where you sit down and consume! It lives from its participants, not from the sponsors or the speakers.

There have always been one-man shows, and I personally have profited enormously from them; I cannot and will not deny it. But they were NEVER sales pitches, they were simply proof of how great it is to belong to a community. I once asked some of these friends – I think I can call you friends by now – why they do this, why they invest their valuable time in us noobs, and the answer was "so that you will then carry it on in exactly the same way". And that is exactly what I am doing: I want to give something back to the community and invest my time in the future, in our successors: showing others how brilliant a community event is, how great the people are who make up the community.

That's why I have a very hard time with attempts to misuse these events as a "project market". (A harsh word, I know, but you know that I say what I think 😉)

So here is my appeal: keep visiting the Open Spaces, but avoid sessions announced as "I'll show you how it's done", unless you explicitly asked for them. Don't let your time be stolen just to be showered with slides by a single person, when in that time you could experience a real exchange of experiences with honest reports, without PowerPoint slides. Everyone has something to offer; the mere fact that you found your way to the Open Space proves that! And one more thing: go ahead and ask all the questions you have, because one thing is certain: here there is no division of roles into speaker/listener, developer/administrator, software developer/project manager, etc., and the topics emerge on site all by themselves. ( http://nossued.de/ )

The next Open Space is the Spartakiade and after that – probably 😉 – the Shorty Open Space (to be found and registered for via Twitter) or the OPEN SPACE SÜD (in June or July in Karlsruhe) – and after that, of course, the 9th Open Space in Leipzig. So let's establish a new rule: slides are banned!

Graph databases

My second day at the Spartakiade was dedicated to the subject of graph databases.

In computing, a graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A graph database is any storage system that provides index-free adjacency. This means that every element contains a direct pointer to its adjacent elements and no index lookups are necessary. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases. (source: Wikipedia)

The workshop was led by Stephan (@Piratevsninja) – thank you very much! – and we used Neo4j, the most popular open source graph database. After this deep-dive day I can say that I can start creating my first graph database application without asking myself all the time what the hell I am doing 🙂

So: what is a graph database?

At a very high level we can split databases into two types: RDBMS and NoSQL. In other words: relational and non-relational storage.

NoSQL databases – called "Not Only SQL" by Martin Fowler – focus mainly on the data model and not on the relations between the data. Mostly there isn't any relation between the entities. They can be differentiated based on the data model they use. Here are some examples: key-value stores (Redis, CouchDB, etc.), document DBs (Lotus Notes, MongoDB, etc.), column-based DBs (Cassandra, HBase, etc.).

Relational databases (RDBMS) store the data normalized and define the relations between the data types (i.e. between ALL the entries of one type). I don't think I have to give examples of our plain old databases: if you can join and distinct data, you are in the world of relational databases.

Graph databases combine both worlds: they are relational databases with the main focus on the relations between the data (not between the data models) – or, as Stephan formulated it: they put data in the context of relationships.

Nodes and relations
Emil knows Ian (source: neo4j.com)

How do you define the content?

A graph database contains nodes (instances like "Emil" and "Ian") and relations between these nodes ("knows"). A node is defined through its properties and can be grouped through labels. Nodes often have aliases to make it easier to work with them:

Emil:Person {name:"Emil", age:"20"}, Ian:Person {name:"Ian"}

A relation is defined through a name, the nodes it connects, and the direction of the connection. Relations can also have properties, but these should be chosen very carefully: they must describe the relation and not the nodes.

(Emil)-[:KNOWS {certainty:100}]->(Ian)

Now it is clear to see the difference between a "plain" relational database and a graph database: for the former you always care about the data; for the latter the data means nothing without its relation to some other data.
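
Here is a small sketch of how the two nodes and the KNOWS relation above could be created and queried from code, assuming the official JavaScript driver and a local Neo4j instance (connection details are placeholders):

    import neo4j from "neo4j-driver";

    const driver = neo4j.driver("bolt://localhost:7687", neo4j.auth.basic("neo4j", "secret"));

    async function emilKnowsIan(): Promise<void> {
      const session = driver.session();
      try {
        // Create the two Person nodes and the directed KNOWS relation between them.
        await session.run(
          `MERGE (emil:Person {name: $emil, age: "20"})
           MERGE (ian:Person {name: $ian})
           MERGE (emil)-[:KNOWS {certainty: 100}]->(ian)`,
          { emil: "Emil", ian: "Ian" }
        );

        // Ask the graph: whom does Emil know?
        const result = await session.run(
          "MATCH (:Person {name: $name})-[:KNOWS]->(friend) RETURN friend.name AS friend",
          { name: "Emil" }
        );
        result.records.forEach((record) => console.log(record.get("friend")));
      } finally {
        await session.close();
        await driver.close();
      }
    }

    emilKnowsIan().catch(console.error);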

Movies and actors

 

Fine, I can put actors into relations. So what?

The most important point is: think around the corner. The fact that I can report that Ian knows Emil and that Johann knows Emil too can be interesting, but I don't think there are any new business ideas in the domain of social connections which haven't been evaluated yet. What about the information that only 20% of the Swedish tourists who visit Germany and are between 18 and 25 do not speak German? This is surely VERY interesting to know if you sell German dictionaries near universities…
I just invented this idea – I have no idea how many Swedes between 18 and 25 speak German 😉 – but this is what I mean by thinking around the corner!

What else remains to be done?

After giving the design a good thought – the relations and the connected data, like IDs and other characteristics, but only if they are must-haves – there are only a few things left to do. Neo4j, just like all the other graph databases, has some kind of API to create, insert, update, and query data. You only have to save the data from your application and create a UI (or use the one from Neo4j, which is one of the coolest UIs I have ever seen) to create reports. Put these reports in front of the business analysts and you are done!