All of this comes from reading everything in Martin Fowler's https://www.martinfowler.com/microservices/, and many of its linked articles.
Microservice architectural style: building applications as suites of services
Amazon has the idea of a Two Pizza Team - a team should be feedable by two pizzas
Smart Endpoints Dumb Pipes - There is no smart choreography of messages or central buses, just straight service calls over HTTP or something similarly simple. At most, there are message queue services (e.g. rabbitmq).
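A minimal sketch of the idea, assuming a made-up "inventory" service: one service exposes a plain HTTP endpoint and another calls it directly, with no bus or choreography in between. The service names, path, and response fields are all invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A "smart endpoint": the (hypothetical) inventory service owns its
# logic and answers over plain HTTP -- the pipe itself does nothing clever.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "abc-123", "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service calls it with a straight HTTP request -- that's the whole pipe.
with urlopen(f"http://127.0.0.1:{server.server_port}/stock/abc-123") as resp:
    stock = json.load(resp)

server.shutdown()
print(stock["in_stock"])  # -> 7
```

All the intelligence lives at the two endpoints; the transport stays dumb.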
Monolith First
Build a monolith first to explore the domain, then extract microservices. Unfortunately there's no great way to build a microservice-ready monolith; changing it is gonna be hard no matter what. Might be best to build internal components with strong boundaries, but be aware that those boundaries will change as you understand the problem better.
Don't start with a monolith
Separating a system after the fact is just too damn hard. Start the initial design with subsystems.
Microservices and the First Law of Distributed Objects
First Law of Distributed Object Design: "don't distribute your objects"
- Local calls are fine-grained. Can do them individually. Are sure they'll return.
- Remote calls are coarse-grained. Better to batch. May fail due to network errors.
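A toy sketch of the fine- vs. coarse-grained distinction, with a stand-in class (invented here) that just counts round trips instead of making real network calls:

```python
# Hypothetical stand-in for a service on the other side of a network;
# it counts round trips so the granularity difference is visible.
class RemoteUserService:
    def __init__(self):
        self.round_trips = 0
        self._names = {1: "ada", 2: "grace", 3: "alan"}

    def get_name(self, user_id):        # fine-grained: one trip per call
        self.round_trips += 1
        return self._names[user_id]

    def get_names(self, user_ids):      # coarse-grained: one trip, batched
        self.round_trips += 1
        return {uid: self._names[uid] for uid in user_ids}

svc = RemoteUserService()
fine = [svc.get_name(uid) for uid in (1, 2, 3)]   # three chances to fail
trips_fine = svc.round_trips

svc.round_trips = 0
coarse = svc.get_names([1, 2, 3])                 # one chance to fail
print(trips_fine, svc.round_trips)  # -> 3 1
```

In-process, the fine-grained loop is harmless; across a network, each extra trip adds latency and a failure mode, which is why distributed interfaces drift coarse.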
Learn from SOA: 5 lessons for the microservices era
SOA originally was an idea that you need to connect applications over the internet, preferably with a bus, using SOAP. It eventually became just another layer of complexity, without adding enough value, especially compared to lightweight REST.
(Interesting note: Facebook and Etsy are monoliths.)
- Microservices quickly form strong boundaries, probably along team lines. It's hard to avoid.
- Deployment of individual components is easier and safer.
- Easy to use the best tool (programming language) for the job.
- Distributed computing is hard.
- Distributed databases are hard.
- Continuous deployment is complex.
Don't use microservices unless your system is too complicated for a monolith. There's too much overhead in dealing with distributed systems and complex deployment.
How Big Should A Microservice Be?
A microservice is just a layer of abstraction, like an object. The abstraction should be able to fit in your head. Single responsibility principle.
- Object layer - each object models a single small thing
- Domain layer - each domain or namespace combines objects into logical collections. Think vertical slices, not groups like "controllers" and "models".
- Application layer - this is your microservice. You combine different domains into a cohesive application.
Microservices requires a new set of tools that traditional companies may not have. This is a little bit of DevOps.
For even a few microservices you need:
- Automatically build a new cloud server - don't hold up teams by having servers be manually built.
- Centralized Monitoring - things are going to go wrong in new ways, you'll need monitoring to understand it all.
- Automatic deployment - teams need to deploy at their own timeline.
For a lot of microservices, you'll need:
- Continuous delivery - How is this different from Automatic deployment? I think it's just all that plus testing.
- Product oriented teams - So each team owns their business
Microservices really need a DevOps culture. Agile has already broken down the silos between the different aspects of designing and building software; DevOps is needed to break down the barriers between development and operations.
Use versioning only as a last resort
Some people, when confronted with a problem, think "I know, I'll use versioning." Now they have 2.1.0 problems.
Versioning services sucks. Tread carefully.
Self Testing Code
A core component of microservices is Continuous Integration, and a core component of CI is self-testing code.
- Allows you to be confident that the current code is not broken.
- Allows you to be confident to make changes.
Testing Strategies in a Microservice Architecture
- Unit Tests test fine grained, internal logic.
- Integration Tests test the paths between subsystems.
- Component Tests test subsystems of the service.
Overlapping these three types of tests ensures good coverage.
Contract Tests should be created when other services integrate with ours, to make sure their expectations are met. These test only the inputs and outputs. Then you'll know what the effects of changing your service will be.
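A sketch of the idea, assuming a made-up "order" provider and two consumers: each contract names only the fields (and types) that one consumer actually reads, so anything outside the contract stays free to change.

```python
# Hypothetical provider endpoint, simplified to a plain function.
def get_order(order_id):
    return {"id": order_id, "status": "shipped", "total_cents": 1999,
            "internal_notes": "no consumer relies on this -- free to change"}

# Consumer-driven contracts: only what each consumer depends on.
BILLING_CONTRACT = {"id": int, "total_cents": int}
SHIPPING_CONTRACT = {"id": int, "status": str}

def satisfies(response, contract):
    # The response may contain extra fields; the contract only pins these.
    return all(isinstance(response.get(field), typ)
               for field, typ in contract.items())

response = get_order(42)
print(satisfies(response, BILLING_CONTRACT),
      satisfies(response, SHIPPING_CONTRACT))  # -> True True
```

If a change to the provider breaks one of these checks, you know exactly which consumer it affects before deploying.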
- End-to-End Tests treat the service as a black box, and tests the whole thing to ensure the business need is met. Hard to write, difficult to use to debug, so don't write a lot.
- Few End-to-End (UI) Tests
- Some Service/Component Tests
- Many Unit Tests
Tests at the top of the pyramid are harder to make, brittle to maintain, and require more specialized software, but prove the basic business need is met.
Tests at the bottom of the pyramid are easier to make, easy to maintain, and require little software, but don't show that the business need is met.
Fun note: if a bug appears in an End-to-End test but not in a Unit Test, that says there's a missing unit test.
A pattern to implement backwards-incompatible changes.
Step 1) Expand. Create or version new endpoints with different inputs and outputs.
Step 2) Migrate. Move clients/consumers from old version to new version, both internal and external clients.
Step 3) Contract. Delete the old endpoints/version.
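The expand step can be sketched like this, with an invented "user" endpoint: during the migrate phase both response shapes are served side by side, so clients move over at their own pace before the old shape is deleted.

```python
# Hypothetical data store and endpoints for the expand/migrate/contract pattern.
USERS = {7: {"first": "Ada", "last": "Lovelace"}}

def get_user_v1(user_id):
    # Old shape: a single combined "name" field. Deleted in the contract step.
    u = USERS[user_id]
    return {"name": f"{u['first']} {u['last']}"}

def get_user_v2(user_id):
    # Expanded shape: split fields, added alongside v1, not instead of it.
    u = USERS[user_id]
    return {"first_name": u["first"], "last_name": u["last"]}

# During "migrate", both endpoints answer consistently from the same data.
print(get_user_v1(7), get_user_v2(7))
```

Because both versions read the same underlying data, nothing can drift while consumers are mid-migration; contract only happens once the old endpoint has no callers left.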
How Do Committees Invent?
The original source of Conway's Law!
Basically makes its point by showing that there is a homomorphism between the design of a system produced by a particular organization and the organization itself.
It is an article of faith among experienced system designers that given any system design, someone someday will find a better one to do the same job. In other words, it is misleading and incorrect to speak of the design for a specific job, unless this is understood in the context of space, time, knowledge, and technology. The humility which this belief should impose on system designers is the only appropriate posture for those who read history or consult their memories.
Corollary - A system will reflect the number of people who designed it. A team of two working for 50 days will produce a different system than a team of 50 working for two days.
Assumptions which may be adequate for peeling potatoes and erecting brick walls fail for designing systems.
Demystifying Conway's Law
Conway's law has been shown to hold when identical project requirements were developed by centrally located teams and by remote teams.
Business Capability Matrix
Teams should be aligned by business needs (a product), not project needs (temporary work). This is the agile+devops way, where a team owns the whole shebang.
Narayan says that this leads to needing more developers, since there's no hand-off-and-forget. I worry far more about a company that needs to scale linearly with the number of clients. Scaling linearly with the number of products is a godsend compared to that.
(Note: Not my favorite Venn diagram.)
Bounded Context
A driving concept from Domain Driven Design. It's not useful or economical to build a universal model of all things. It's better to have domains that remain extremely separate, and only communicate through smaller, shared concepts.
Each domain has its own canonical view of the world.
A Conversation with Werner Vogels
Amazon was a monolith from 1996 to 2001. Then it instituted a Service Oriented Architecture:
- The services coupled data with code
- Access only allowed through endpoints
- No direct DB access
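The data-coupled-with-code rule can be sketched in-process, with invented catalog/checkout services: one service keeps its storage private and other services can only go through its endpoint, never its database.

```python
# Hypothetical services illustrating "access only through endpoints".
class CatalogService:
    def __init__(self):
        # Private storage -- nothing outside this service touches it.
        self._db = {"abc-123": {"title": "Widget", "price_cents": 499}}

    def get_product(self, sku):
        # The only way in: an endpoint that returns a copy, not the row.
        row = self._db.get(sku)
        return dict(row) if row else None

class CheckoutService:
    """Depends on the catalog's endpoint, never its database."""
    def __init__(self, catalog):
        self.catalog = catalog

    def line_total(self, sku, qty):
        return self.catalog.get_product(sku)["price_cents"] * qty

checkout = CheckoutService(CatalogService())
print(checkout.line_total("abc-123", 3))  # -> 1497
```

Because the catalog's schema is invisible to callers, it can change its storage freely as long as the endpoint's shape holds, which is the point of the rule.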
Biggest troubles are testing that all the pieces work together.
Chaotic in a good sense.
How We Ended Up With Microservices
A great article on how they measured their process, then took steps to reduce the waste.
A Value Stream Map shows the stream a feature takes from conception to launch, and the time taken in each step.
SoundCloud took 66 days to deploy a feature. Most of that time was waiting between steps.
They started combining steps by teaming people. Like, frontend and backend were integrated into a team. They individually had more work to do, but reduced back-and-forth time.
Two years into microservices - How we dealt with some of the downsides
- DevOps overhead was so high they hired a full-time DevOps engineer.
- Codebases were too varied, so they built a framework to start all microservices with.
- Too many microservices were hard to keep track of, so they built a centralized documentation service, where each microservice could generate a JSON file explaining its endpoints.
- Since a REST API isn't appropriate for all cases, they built a rabbitmq message bus.
- If the frontend only communicates with the backend, which then communicates with the microservices, the backend is a single point of failure. Instead the frontend should communicate directly with the microservices. (But then you have to use a JSON web token to secure/validate the requests.)
- Tons more requests, slowing down the page load. Cache in every microservice, and use client-side storage where applicable.
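The per-service JSON descriptor from the documentation-service point above might look something like this; the field names and endpoints are invented, since the article doesn't give the actual schema.

```python
import json

# Hypothetical shape of the JSON file a microservice could generate to
# describe its own endpoints for a central documentation service.
descriptor = {
    "service": "invoices",
    "endpoints": [
        {"method": "GET", "path": "/invoices/{id}",
         "returns": "a single invoice as JSON"},
        {"method": "POST", "path": "/invoices",
         "accepts": "invoice fields", "returns": "the created invoice"},
    ],
}

blob = json.dumps(descriptor, indent=2)   # what the service would publish
print(json.loads(blob)["service"])  # -> invoices
```

The documentation service would just collect and render these blobs, so the docs stay current as long as each service regenerates its own file on deploy.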
How we build microservices at Karma
Karma had problems with scaling different APIs at different rates, versioning libraries, and being unable to experiment with different technologies.
Initially split off (strangler!) huge chunks of functionality, but soon did smaller and smaller chunks.
Like above, communication happens with a REST API or a message queue. They use both, through publish and subscribe queues. Each microservice has a configuration file declaring which queues it wants, and the publish and subscribe queues are set up automatically. Which is nice, because if the microservice goes down, the queues are still waiting for it.
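A guess at what that configuration-driven wiring could look like; the config keys and naming scheme are invented, not Karma's actual format:

```python
# Hypothetical per-service queue config: the service declares what it
# publishes and subscribes to, and the broker wiring is derived from that.
config = {
    "service": "notifications",
    "publish": ["email.sent"],
    "subscribe": ["user.signed_up", "order.shipped"],
}

def queues_to_declare(cfg):
    # One durable queue per subscription, named after the service, so
    # messages keep accumulating even while the service is down.
    return [f"{cfg['service']}.{topic}" for topic in cfg["subscribe"]]

print(queues_to_declare(config))
# -> ['notifications.user.signed_up', 'notifications.order.shipped']
```

Naming each queue after the subscribing service is what makes the "queues are still waiting for it" behavior work: the queue belongs to the consumer, not the publisher.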
Problems? Again, testing.