I don't quite buy the "micro-" or even "nano-" services trend. A service shouldn't be as small as possible; it should encapsulate a business component, typically a bounded context. If it covers exactly one bounded context, it's a microservice. Anything smaller is simply impractical from a performance perspective (too chatty a protocol, too much latency). I also don't think a team of one developer is enough. A good team has 5 ± 2 people, including QA, PO, etc. But to do any pair programming or code review, and to improve the bus factor, you need more than one developer.

I agree with the rest of this slide, and the whole talk was really cool. I especially agree with publishing interesting "stuff". Even without microservices we knew that centralized shared databases are bad. An event store keeping a long history of business events (e.g. Kafka) can often solve the problem of sharing data between services. Interestingly, it allows us to start new projects that need lots of (all?) historical data from other projects without costly data migrations and batch jobs: simply grab the historical events, as deep as you want. A word of caution: make sure your distributed event store doesn't become another centralized "database". Otherwise you'll wake up one day with a system publishing events in a slightly different format than yesterday, breaking a too-fragile consumer.
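To make the replay idea concrete, here is a minimal in-memory sketch of it. This is a toy stand-in for a Kafka topic, not a real client; the names (`EventLog`, `replay_balance`) and the event shapes are my own illustration:

```python
class EventLog:
    """Append-only list of business events - a stand-in for a
    long-retention topic (not a real Kafka API)."""
    def __init__(self):
        self._events = []

    def append(self, event):
        self._events.append(event)

    def read_from(self, offset=0):
        # A new project can start reading from offset 0 and get the full
        # history, instead of running a costly one-off data migration.
        return iter(self._events[offset:])

def replay_balance(events):
    """Rebuild an account balance purely from historical events."""
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

log = EventLog()
log.append({"type": "deposited", "amount": 100})
log.append({"type": "withdrawn", "amount": 30})

# A brand-new consumer replays history, as deep as it wants:
print(replay_balance(log.read_from(0)))  # 70
```

The point is that the new consumer never asks another service for its current state; it derives the state it needs from the shared event history.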
Summary principles of MicroServices
- Very, very small
- Team size of one to develop/maintain
- Loosely coupled (including flow)
- Multiple versions acceptable (encouraged?)
- Self-monitoring of each service
- Publish interesting "stuff" (w/o explicit requirements)
- "Application" seems to be a poor conceptualization
I wholeheartedly agree with Fred that asynchronous communication should be preferred. Just as with publishing events, communicating via message passing allows better decoupling and resilience. There is a place for blocking communication: when handling online requests, or when data is needed now or never. But focusing on message passing will pay off.
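The decoupling can be sketched with a plain in-process queue standing in for a message broker (my own simplified illustration, not any specific messaging library):

```python
import queue
import threading

inbox = queue.Queue()

def consumer(results):
    """Handles messages at its own pace, independently of the producer."""
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown signal for this toy example
            break
        results.append(f"handled {msg}")

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()

# The producer just publishes and moves on - it never blocks waiting
# for the consumer, and would keep working even if the consumer lagged.
inbox.put("order-created")
inbox.put("order-paid")
inbox.put(None)
worker.join()

print(results)  # ['handled order-created', 'handled order-paid']
```

With a real broker between the two, the producer also survives the consumer being down entirely, which is exactly the resilience argument above.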
I'm not even trying to summarize what I learned from the slides, just to give you a quick overview of what to look for. The core idea behind the Bitcoin protocol is the blockchain: an immutable, global, append-only list of transactions. Nodes in the Bitcoin network can append to that list, confirming transactions.
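The append-only, tamper-evident nature of the chain comes from each block committing to the previous block's hash. A toy sketch (greatly simplified; real blocks carry far more structure):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    # Each new block stores the hash of its predecessor.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev_hash, "txs": transactions})

chain = []
append_block(chain, ["alice->bob 1 BTC"])
append_block(chain, ["bob->carol 0.5 BTC"])

# Tampering with history breaks the chain: the stored link no longer
# matches the recomputed hash of the altered block.
chain[0]["txs"] = ["alice->mallory 1 BTC"]
print(chain[1]["prev"] == block_hash(chain[0]))  # False
```

Rewriting any past block would require recomputing every block after it, which is what makes the list effectively immutable.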
During this session I learned: how transactions get accepted and why it might take several minutes; why mining used to be profitable and how miners can now profit from transaction fees; how the algorithm was designed to control mining speed despite an unknown number of participating nodes; and even what the most popular attacks are and how they are circumvented. Fascinating talk; Jan really knows what he is talking about.
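The mining-speed control boils down to proof-of-work: miners search for a nonce whose hash meets a difficulty target, and the network raises the target as more nodes join so block times stay roughly constant. A simplified sketch (hex-digit difficulty instead of Bitcoin's real target arithmetic):

```python
import hashlib

def mine(block_data, difficulty):
    """Find a nonce so the block's hash starts with `difficulty` zero
    hex digits - the expected work doubles with each extra digit x16."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with some transactions", difficulty=4)
print(digest.startswith("0000"))  # True: the found hash meets the target
```

Verifying the work takes one hash; finding it takes many, which is the asymmetry the whole scheme rests on.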
I would also like to mention that "The Evolution of Hadoop at Spotify - Through Failures and Pain" by Josh Baer and Rafal Wojdyla was quite entertaining. These guys built an impressive Hadoop cluster and shared their knowledge and experience, for example how to monitor jobs developed by a wide range of developers across the company. However, I am a bit concerned about the future of the Map-Reduce concept: Google published the idea in 2004 and abandoned it in 2014 - are we betting on the right horse with Hadoop? Another talk I want to link is "Developing Event-driven Microservices with Event Sourcing & CQRS" by Chris Richardson - I couldn't attend that one, but the slides look very promising.
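For readers new to the model the talks build on, the MapReduce concept fits in a few lines: a map phase emits key-value pairs, and a reduce phase aggregates them per key. This in-process word-count sketch ignores the distributed shuffle, sort, and fault tolerance that Hadoop actually provides:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: sum the counts per key (the shuffle/sort is implicit here).
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data", "big cluster"]
print(reduce_phase(map_phase(lines)))  # {'big': 2, 'data': 1, 'cluster': 1}
```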
Overall I enjoyed the GOTO Amsterdam conference; big thanks to 4Finance IT for sponsoring the trip.
Tags: 4FinanceIT, bitcoin, conferences, microservices, review