Microservices Interview Questions and Answers

List down the advantages of Microservices Architecture.

Advantages of Microservices Architecture:
  • Independent Development – Each microservice can be developed independently based on its individual functionality.
  • Independent Deployment – Services can be deployed individually, without having to redeploy the entire application.
  • Fault Isolation – Even if one service of the application fails, the system continues to function.
  • Mixed Technology Stack – Different languages and technologies can be used to build different services of the same application.
  • Granular Scaling – Individual components can scale as per need; there is no need to scale all components together.

What do you know about Microservices?

  • Microservices, aka Microservice Architecture, is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
  • Think of how honey bees build a beehive: they start with a small section using various materials and keep building until they have a large beehive.
  • The cells form a pattern, resulting in a strong structure that holds together a particular section of the beehive.
  • Each cell is independent of the others, yet also correlated with them.
  • Damage to one cell therefore does not damage the other cells, so bees can reconstruct a cell without impacting the complete beehive.
Fig 1: Beehive Representation of Microservices
Refer to the above diagram. Here, each hexagonal shape represents an individual service component. Similar to the working of bees, each agile team builds an individual service component with the available frameworks and the chosen technology stack. Just as in a beehive, each service component forms a strong microservice architecture to provide better scalability. Also, issues with each service component can be handled individually by the agile team with no or minimal impact on the entire application.

What are the features of Microservices?
Fig 3: Features of Microservices
  • Decoupling – Services within a system are largely decoupled. So the application as a whole can be easily built, altered, and scaled
  • Componentization – Microservices are treated as independent components that can be easily replaced and upgraded
  • Business Capabilities – Microservices are very simple and focus on a single capability
  • Autonomy – Developers and teams can work independently of each other, thus increasing speed
  • Continuous Delivery – Allows frequent releases of software, through systematic automation of software creation, testing, and approval
  • Responsibility – Microservices do not focus on applications as projects. Instead, they treat applications as products for which they are responsible
  • Decentralized Governance – The focus is on using the right tool for the right job. That means there is no standardized pattern or any technology pattern. Developers have the freedom to choose the best useful tools to solve their problems
  • Agility – Microservices support agile development. Any new feature can be quickly developed and discarded again
What are the best practices to design Microservices?
The following are the best practices to design microservices:

Microservices Best Practices

1. The Single Responsibility Principle

Just like with code, where a class should have only a single reason to change, microservices should be modeled in a similar fashion. Building bloated services which are subject to change for more than one business context is a bad practice.

Example: Let’s say you are building microservices for ordering a pizza. You can consider building the following components based on the functionality each supports like InventoryService, OrderService, PaymentsService, UserProfileService, DeliveryNotificationService, etc. InventoryService would only have APIs that fetch or update the inventory of pizza types or toppings, and likewise others would carry the APIs for their functionality. 
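A minimal sketch (with hypothetical class, repository, and endpoint names) of how InventoryService can be kept to a single responsibility, exposing only inventory-related APIs:

import org.springframework.web.bind.annotation.*;

// Hypothetical sketch: this controller only knows about inventory.
// Orders, payments, and delivery notifications live in their own services.
@RestController
@RequestMapping("/inventory")
public class InventoryController {

    private final InventoryRepository repository; // hypothetical data access component

    public InventoryController(InventoryRepository repository) {
        this.repository = repository;
    }

    // Fetch current stock of a pizza type or topping
    @GetMapping("/{itemId}")
    public InventoryItem getItem(@PathVariable String itemId) {
        return repository.findByItemId(itemId);
    }

    // Update the stock of a pizza type or topping
    @PutMapping("/{itemId}")
    public InventoryItem updateItem(@PathVariable String itemId, @RequestBody InventoryItem item) {
        return repository.save(item);
    }
}

If the way orders are placed changes, only OrderService changes; InventoryService has exactly one business reason to change.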

2. Have a separate data store(s) for your microservice

It defeats the purpose of having microservices if you are using a monolithic database that all your microservices share. Any change or downtime to that database would then impact all the microservices that use the database. Choose the right database for your microservice needs, customize the infrastructure and storage to the data that it maintains, and let it be exclusive to your microservice. Ideally, any other microservice that needs access to that data would only access it through the APIs that the microservice with write access has exposed. 

3. Use asynchronous communication to achieve loose coupling

To avoid building a mesh of tightly coupled components, consider using asynchronous communication between microservices. 

a. Make calls to your dependencies asynchronously, example below. 

Example: Let’s say you have a Service A that calls Service B. Once Service B returns a response, Service A returns success to the caller. If the caller is not interested in Service B’s output, then Service A can asynchronously invoke Service B and instantly respond with a success to the caller. 

b. An even better option is to use events for communicating between microservices. Your microservice would publish an event to a message bus either indicating a state change or a failure and whichever microservice is interested in that event, would pick it up and process it. 

Example: In the pizza order system above, sending a notification to the customer once their order is captured, or status messages as the order gets fulfilled and delivered, can happen using asynchronous communication. A notification service can listen to an event that an order has been submitted and process the notification to the customer.
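A minimal sketch of the event-driven option, assuming Spring for Apache Kafka and hypothetical names (order-events topic, ORDER_SUBMITTED payload, notification-service consumer group):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// In the Order service: publish an event once the order is captured and return immediately.
@Service
public class OrderEventsPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventsPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void orderSubmitted(String orderId) {
        // Fire-and-forget: the order flow does not wait for the notification to be delivered
        kafkaTemplate.send("order-events", orderId, "ORDER_SUBMITTED");
    }
}

// In the Notification service: pick up the event and notify the customer.
@Service
public class OrderNotificationListener {

    @KafkaListener(topics = "order-events", groupId = "notification-service")
    public void onOrderEvent(String message) {
        // send the email/SMS to the customer here
    }
}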

4. Fail fast by using a circuit breaker to achieve fault tolerance

If your microservice is dependent on another system to provide a response, and that system takes forever to respond, your overall response SLAs will be impacted. To avoid this scenario and quickly respond, one simple microservices best practice you can follow is to use a circuit breaker to timeout the external call and return a default response or an error. The Circuit Breaker pattern is explained in the references below. This will isolate the failing services that your service is dependent on without causing cascade failures, keeping your microservice in good health. You can choose to use popular products like Hystrix that Netflix developed. This is better than using the HTTP CONNECT_TIMEOUT and READ_TIMEOUT settings as it does not spin up additional threads beyond what’s been configured.

5. Proxy your microservice requests through an API Gateway

Instead of every microservice in the system performing the functions of API authentication, request / response logging, and throttling, having an API gateway doing these for you upfront will add a lot of value. Clients calling your microservices will connect to the API Gateway instead of directly calling your service. This way you will avoid making all those additional calls from your microservice and the internal URLs of your service would be hidden, giving you the flexibility to redirect the traffic from the API Gateway to a newer version of your service. This is even more necessary when a third party is accessing your service, as you can throttle the incoming traffic and reject unauthorized requests from the API gateway before they reach your microservice. You can also choose to have a separate API gateway that accepts traffic from external networks. 

6. Ensure your API changes are backwards compatible

You can safely introduce changes to your API and release them fast as long as they don’t break existing callers. One option is to notify your callers and have them sign off on your changes through integration testing. However, this is expensive: all the dependencies need to line up in one environment, and the coordination will slow you down. A better option is to adopt contract testing for your APIs. The consumers of your APIs provide contracts describing the response they expect from your API. As a provider, you integrate those contract tests into your builds, and they safeguard against breaking changes. Consumers can test against the stubs that you publish as part of their own builds. This way, you can go to production faster by independently testing your contract changes.

7. Version your microservices for breaking changes

It's not always possible to make backwards compatible changes. When you are making a breaking change, expose a new version of your endpoint while continuing to support older versions. Consumers can choose to use the new version at their convenience. However, having too many versions of your API can create a nightmare for those maintaining the code. Hence, have a disciplined approach to deprecate older versions by working with your clients or internally rerouting the traffic to the newer versions.

8. Have dedicated infrastructure hosting your microservice

You can have the best designed microservice meeting all the checks, but with a bad design of the hosting platform it would still behave poorly. Isolate your microservice infrastructure from other components to get fault isolation and best performance. It is also important to isolate the infrastructure of the components that your microservice depends on.

Example: In the pizza order example above, let's say the inventory microservice uses an inventory database. It is not only important for the Inventory Service to have dedicated host machines, but also the inventory database needs to have dedicated host machines.


How does Microservice Architecture work?
A microservice architecture has the following components:
Fig 5: Architecture of Microservices
  • Clients – Different users from various devices send requests.
  • Identity Providers – Authenticates users’ or clients’ identities and issues security tokens.
  • API Gateway – Handles client requests.
  • Static Content – Houses all the content of the system.
  • Management –  Balances services on nodes and identifies failures.
  • Service Discovery – A guide to find the route of communication between microservices.
  • Content Delivery Networks – Distributed network of proxy servers and their data centers.
  • Remote Service – Enables remote access to information that resides on a network of IT devices.

What are the pros and cons of Microservice Architecture?
Pros of Microservice Architecture:
  • Freedom to use different technologies
  • Each microservice focuses on a single capability
  • Supports individually deployable units
  • Allows frequent software releases
  • Ensures security of each service
  • Multiple services are developed and deployed in parallel
Cons of Microservice Architecture:
  • Increases troubleshooting challenges
  • Increases delay due to remote calls
  • Increased effort for configuration and other operations
  • Difficult to maintain transaction safety
  • Tough to track data across various service boundaries
  • Difficult to code between services


What is the difference between Monolithic, SOA and Microservices Architecture?
Fig 6: Comparison Between Monolithic, SOA & Microservices
  • Monolithic Architecture is similar to a big container wherein all the software components of an application are assembled together and tightly packaged.
  • Service-Oriented Architecture is a collection of services which communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity.
  • Microservice Architecture is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
What are the challenges you face while working with Microservice Architectures?
Developing a number of smaller microservices sounds easy, but the challenges often faced while developing them are as follows.
  • Automate the Components: Automation is difficult because there are a number of smaller components, and for each component we have to follow the stages of Build, Deploy, and Monitor.
  • Perceptibility: With a large number of components, it becomes difficult to deploy, maintain, monitor, and identify problems. It requires great visibility around all the components.
  • Configuration Management: Maintaining the configurations for the components across the various environments can become tough.
  • Debugging: It is difficult to trace an error across each and every service. Maintaining centralized logging and dashboards is essential for debugging problems.
What are the key differences between SOA and Microservices Architecture?
The key differences between SOA and microservices are as follows:
  • Architecture approach – SOA follows a “share-as-much-as-possible” approach; microservices follow a “share-as-little-as-possible” approach.
  • Focus – SOA emphasizes business functionality reuse; microservices emphasize the concept of “bounded context”.
  • Governance – SOA has common governance and standards; microservices focus on people collaboration and freedom of choice.
  • Communication – SOA uses an Enterprise Service Bus (ESB); microservices use a simple messaging system.
  • Protocols – SOA supports multiple message protocols; microservices use lightweight protocols such as HTTP/REST.
  • Threading – SOA is multi-threaded with more overheads to handle I/O; microservices are usually single-threaded, using event-loop features for non-blocking I/O handling.
  • Reuse vs decoupling – SOA maximizes application service reusability; microservices focus on decoupling.
  • Databases – SOA more often uses traditional relational databases; microservices more often use modern relational databases.
  • Change management – In SOA, a systematic change requires modifying the monolith; in microservices, a systematic change means creating a new service.
  • Delivery – In SOA, DevOps / Continuous Delivery is becoming popular but is not yet mainstream; microservices have a strong focus on DevOps / Continuous Delivery.

What are the characteristics of Microservices?
You can list down the characteristics of microservices as follows:
Fig 7: Characteristics of Microservices



How does Hystrix implement the Bulkhead Design Pattern?
One of the most important aspects of a microservice architecture is resiliency.
The available resources of a client that consumes a remote service should never be exhausted if the service is failing, but instead, they should be released fast for further use.
Netflix’s Hystrix implements common resiliency patterns like circuit breaker and bulkhead that help us design a highly resilient microservice architecture.
A client consuming one or more remote services can harness the power of Hystrix by providing a fallback strategy when the service fails to deliver. Hystrix in this case ensures that ailing services don’t bring the whole system down.
Any method annotated with @HystrixCommand is managed by Hystrix, and therefore, is wrapped by a proxy that manages all calls to that method through a separate, initially fixed thread pool.
The following illustration gives a high-level overview of Hystrix’s default thread pool, which manages all calls to the methods it wraps.
Fig: Hystrix’s default thread pool
By default, Hystrix’s thread pool contains ten threads for processing Hystrix-wrapped calls. So, if your client application is making a large number of Hystrix-wrapped calls (which can be calls to a remote database or a service), the available threads will get exhausted in a short period of time and the client will fail.
To handle such cases, Hystrix allows you to create separate thread pools for every remote resource call (the Bulkhead design pattern implementation). So, if one resource call uses up all its available resources, only the associated thread pool will fail, while other parts of the client remain intact.
The bulkhead pattern is displayed in the following illustration.
Fig: The bulkhead pattern
If the database is performing slowly, it will impact the other remote calls in Thread Pool 1, while the other thread pools managed by Hystrix remain intact and proceed with their respective remote calls.
Supposing you have set up a service that calls another service through a FeignClient, as in the following scenario:
@Autowired
private InstitutionManagementClient institutionManagementClient; // the Feign client

@HystrixCommand(fallbackMethod = "getAssociatedInstitutionFallback",
        threadPoolKey = "threadPoolInstitution",
        threadPoolProperties = {
                @HystrixProperty(name = "coreSize", value = "15"),
                @HystrixProperty(name = "maxQueueSize", value = "5")
        }
)
public Institution getAssociatedInstitution(String agentId) {
    return institutionManagementClient.getInstitutionForAgent(agentId);
}

public Institution getAssociatedInstitutionFallback(String agentId) {
    return Institution.title("Null institution").agent(agentId).build();
}

Here, we are defining a custom thread pool for the remote service call. The threadPoolKey attribute defines a unique name for the thread pool.

The threadPoolProperties attribute lets you define and customize the behaviour of the thread pool.
The coreSize property sets the size of the newly created thread pool to 15 (the default is 10).
You can also set up a queue in front of the thread pool that will control how many requests will be allowed to back up when the threads in the thread pool are busy. This queue size is set by the maxQueueSize attribute. Once the number of requests exceeds the queue size, any additional requests to the thread pool will fail until there is room in the queue.
Note: The maxQueueSize property has a default value of -1, in which case a Java SynchronousQueue is used to hold all incoming requests. A synchronous queue essentially enforces that you can never have more requests in process than the number of threads available in the thread pool.
Setting the maxQueueSize to a value greater than one will cause Hystrix to use a Java LinkedBlockingQueue. The use of a LinkedBlockingQueue allows you to queue up requests even if all threads are busy processing requests.
Besides the bulkhead pattern, the example above includes the Circuit Breaker pattern as well.
The getAssociatedInstitutionFallback method gets invoked every time the call to the remote service fails or exceeds the timeout.
Note: The fallback method must have the exact same signature as the method wrapped by Hystrix.
What will happen if a Hystrix-wrapped method continuously pings an unavailable, ailing, or resource-exhausted remote service? Does the fallback method get continuously invoked?
Besides providing the means to implement the circuit breaker and bulkhead patterns, Hystrix offers call-monitoring functionality: it continuously monitors the number of times a wrapped method fails within a configurable ten-second window, and if a predefined failure threshold is reached, the circuit breaker is tripped and all subsequent calls fail immediately until the remote service is up and running again.
Extending the above code, we can add commandProperties to customize the default 'fail-fast' behaviour.
@HystrixCommand(fallbackMethod = "getAssociatedInstitutionFallback",
        threadPoolKey = "threadPoolInstitution",
        threadPoolProperties = {
                @HystrixProperty(name = "coreSize", value = "15"),
                @HystrixProperty(name = "maxQueueSize", value = "5")
        },
        commandProperties = {
                @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "5"),
                @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50")
        }
)

As its name suggests, circuitBreaker.requestVolumeThreshold defines the minimum number of calls that must occur within the ten-second window. If this number is reached, the next property, circuitBreaker.errorThresholdPercentage, defines the percentage of those calls that need to fail (due to timeouts, an exception being thrown, or an HTTP 500 being returned) in order for the circuit breaker to be tripped.
After that, all subsequent calls fail directly, without calling the faulty service. At some point, the application needs to check whether the remote service has recovered. That is also handled by Hystrix and happens after a predefined sleep window (five seconds by default), which can be overridden with the following property in commandProperties:

@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "5000")
In order to provide a complete fallback strategy, let’s include a timeout property as well. This property controls the time threshold before a fallback strategy is used.
@HystrixCommand(fallbackMethod = "getAssociatedInstitutionFallback",
        threadPoolKey = "threadPoolInstitution",
        threadPoolProperties = {
                @HystrixProperty(name = "coreSize", value = "15"),
                @HystrixProperty(name = "maxQueueSize", value = "5")
        },
        commandProperties = {
                @HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "5000"),
                @HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "5"),
                @HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "50"),
                @HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "7000"),
                @HystrixProperty(name = "metrics.rollingStats.timeInMilliseconds", value = "15000"),
                @HystrixProperty(name = "metrics.rollingStats.numBuckets", value = "5")
        }
)

The metrics.rollingStats.timeInMilliseconds is used to control the size of the window that will be used by Hystrix to monitor for problems with a service call. The default value for this is 10,000 milliseconds (that is, 10 seconds). 
The second property, metrics.rollingStats.numBuckets, controls the number of times statistics are collected in the window you’ve defined. Hystrix collects metrics in buckets during this window and checks the stats in those buckets to determine if the remote resource call is failing.
The number of buckets defined must divide evenly into the overall window set by metrics.rollingStats.timeInMilliseconds. For example, with the custom settings in the previous listing, Hystrix will use a 15-second window and collect statistics into five buckets of three seconds each.


How to handle versioning of microservices?
There are different ways to handle the versioning of your REST API so that older consumers can still consume the older endpoints.

The ideal practice is that any non-backward compatible change in a given REST endpoint shall lead to a new versioned endpoint.
Different mechanisms of versioning are:
1. Add version in the URL itself as a path param
2. Add version in API request header or add it as request query parameter.

The most common approach is URL versioning itself.
A versioned URL looks like the following:
http://<host>:<port>/api/v1/....
http://<host>:<port>/api/v2/....

As a developer, you must ensure that only backward-compatible changes are accommodated within a single URL version.
Consumer-Driven-Tests can help identify potential issues with API upgrades at an early stage.
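A minimal sketch of URL versioning with Spring MVC, using hypothetical controller and response types (OrderV1Response, OrderV2Response):

import org.springframework.web.bind.annotation.*;

// Existing consumers keep calling /api/v1/orders/{id} and continue to work unchanged.
@RestController
@RequestMapping("/api/v1/orders")
class OrderV1Controller {

    @GetMapping("/{id}")
    public OrderV1Response get(@PathVariable String id) {
        // returns the original (v1) representation of an order
        return new OrderV1Response(id);
    }
}

// Breaking changes (for example, a restructured payload) are exposed only under /api/v2.
@RestController
@RequestMapping("/api/v2/orders")
class OrderV2Controller {

    @GetMapping("/{id}")
    public OrderV2Response get(@PathVariable String id) {
        return new OrderV2Response(id);
    }
}

Once traffic has moved to v2, the v1 controller can be deprecated and removed in a controlled way.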


Is it a good idea to share common database across multiple microservices?
The short answer is no, not the data itself.
In a microservices architecture, each microservice shall own its private data, which can only be accessed by the outside world through the owning service.
If we start sharing a microservice’s private data with other services, we will violate the principle of Bounded Context.

Practically, we have three approaches:
1. Database server per microservice: Each microservice has its own database server instance. This approach has the overhead of maintaining a database instance and its replication/backup, hence it is rarely used in practice.
2. Schema per microservice: Each microservice owns a private database schema which is not accessible to other services. It is the most preferred approach for RDBMS databases (MySQL, PostgreSQL, etc.).
3. Private tables per microservice: Each microservice owns a set of tables that must only be accessed by that service.
It is a logical separation of data. This approach is mostly used for hosted database-as-a-service solutions (e.g., Amazon RDS).

Tables-Per-Service approach – Amazon DynamoDB
If we are using database as a service (like AWS DynamoDB), then you shall prefer private-tables-per-service approach, where each microservice owns a set of tables that must only be accessible by that service. It is mostly a logical separation of data. In this way we can have a single DynamoDB instance for the entire fleet of microservices.

Private-tables-per-service and schema-per-service have the lowest overhead.  
Using a schema per service is appealing since it makes ownership clearer. 
It might also make sense to have a polyglot persistence architecture. For each service you choose the type of database that is best suited to that service’s requirements. For example, a service that does text searches could use ElasticSearch. A service that manipulates a social graph could use Neo4j. It might not make sense to use a relational database for every service.

Downsides to keeping a service’s persistent data private:
It can be challenging to implement business transactions that update data owned by multiple services. Rather than using distributed transactions, you typically must use an eventually consistent, event-driven approach to maintain database consistency.
It is difficult to implement some queries because you can’t do database joins across the data owned by multiple services. Sometimes you can join the data within a service; in other situations you will need to use Command Query Responsibility Segregation (CQRS) and maintain denormalized views.
Sometimes services need to share data. For example, let’s imagine that several services need access to user profile data. One option is to encapsulate the user profile data within a service that is then called by other services. Another option is to use an event-driven mechanism to replicate the data to each service that needs it.


How will you make sure that the email is only sent if the database transaction does not fail?
Consider a hypothetical scenario where we are registering a new user and in the same transactional method we are sending a welcome email to user.
@Transactional
public void register(UserDto dto) {
    validateUser(dto);
    populate(dto);
    save(dto);
    sendEmail();
}

We send the email to the user on the last line of the transactional method, before the database flush happens.
Now consider that the database transaction fails due to a concurrent modification exception after the email has already been sent to the user. That’s a bad experience.
Such scenarios are very common in distributed systems; you may already have faced them in some way.
Spring provides several mechanisms to handle this situation; two common ones are:

1. Using @TransactionalEventListener: To listen for the transaction-success event, our register method publishes a UserCreatedEvent, and a separate component that listens to this event triggers the welcome email to the user.

Using Spring Events (Transactional)

public class UserController {

    @Autowired
    private ApplicationEventPublisher publisher;

    @Transactional
    public void register(UserDto dto) {
        validateUser(dto);
        populate(dto);
        save(dto);
        // published within the transaction; delivered to transactional listeners only on commit
        publisher.publishEvent(new UserCreatedEvent(dto));
    }
}

@Component
public class UserGreetHandler {

    // Invoked only after the surrounding transaction has committed successfully (AFTER_COMMIT by default)
    @TransactionalEventListener
    public void handleUserCreatedEvent(UserCreatedEvent event) {
        sendEmail(event.getEmail());
    }
}

2. Using TransactionSynchronizationManager to register a synchronization callback that gets invoked when the transaction completes successfully.
Using TransactionSynchronizationManager
@Transactional
public void create(String fname, String lname, String email) {
    User user = new User();
    user.setEmail(email);
    user.setFname(fname);
    user.setLname(lname);
    userRepository.save(user);

    TransactionSynchronizationManager.registerSynchronization(
        new TransactionSynchronizationAdapter() {
            @Override
            public void afterCommit() {
                // runs right after the commit is successful
                sendEmail(user.getEmail());
            }
        });
}

Here, the email is sent right after the commit is successful. However, this approach makes the design tightly coupled, as two separate functionalities are executed inside a single method.
Also, the above code does not guarantee atomicity of the two operations: if the transaction succeeds but sending the email fails, Spring will not roll back the transaction.


What is Idempotence and where is it used?
Idempotence is the property of being able to do something twice in such a way that the end result remains the same, i.e., as if it had been done only once.
Usage: Idempotence is used at the remote service or data source so that, when it receives the same instruction more than once, it processes that instruction only once.
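A minimal sketch of an idempotent consumer, assuming a hypothetical PaymentService that deduplicates requests using a client-supplied idempotency key:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the same idempotency key may arrive multiple times
// (client retries, redelivered messages); the charge is executed only once.
public class PaymentService {

    // In production this map would live in a database or cache, not in memory.
    private final Map<String, String> processed = new ConcurrentHashMap<>();

    public String charge(String idempotencyKey, long amountInCents) {
        // computeIfAbsent runs doCharge at most once per key and returns the stored result otherwise
        return processed.computeIfAbsent(idempotencyKey, key -> doCharge(amountInCents));
    }

    private String doCharge(long amountInCents) {
        // call the payment gateway here and return its transaction id
        return "txn-" + System.nanoTime();
    }
}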

What is Two Factor Authentication?

Two-factor authentication adds a second level of authentication to an account login process.
Fig 11: Representation of Two Factor Authentication
So suppose a user has to enter only username and password, then that’s considered a single-factor authentication.
What are the types of credentials of Two Factor Authentication?
The three types of credentials are:
  • Something you know, such as a password or PIN
  • Something you have, such as a phone or a hardware token
  • Something you are, such as a fingerprint or another biometric
Fig 12: Types of Credentials of Two Factor Authentication
What are Client certificates?
A type of digital certificate that is used by client systems to make authenticated requests to a remote server is known as the client certificate. Client certificates play a very important role in many mutual authentication designs, providing strong assurances of a requester’s identity.

What is OAuth?
OAuth stands for Open Authorization protocol. It allows client applications to access the resources of a resource owner on HTTP services offered by third-party providers such as Facebook, GitHub, etc. With it, you can share resources stored on one site with another site without using the owner’s credentials.

What is End to End Microservices Testing?
End-to-end testing validates that each and every process in the workflow functions properly. This ensures that the system works together as a whole and satisfies all requirements.
In layman’s terms, end-to-end testing verifies the complete application flow, from start to finish, rather than individual components in isolation.

What is the use of Container in Microservices?
Containers are a good way to manage microservice-based applications, allowing each service to be developed and deployed individually. You can encapsulate your microservice in a container image along with its dependencies, which can then be used to roll out on-demand instances of the microservice without any additional effort.


What do you understand by Semantic monitoring in Microservices architecture?
Semantic monitoring, also known as synthetic monitoring, combines automated tests with monitoring of the application in order to detect failing business requirements.

What is the difference between Mock or Stub?

Stub
  • A dummy object that helps in running the test.
  • Provides fixed behavior under certain conditions which can be hard-coded.
  • Any other behavior of the stub is never tested.
For example, for an empty stack, you can create a stub that just returns true for empty() method. So, this does not care whether there is an element in the stack or not.
Mock
  • A dummy object in which certain properties are set initially.
  • The behavior of this object depends on the set properties.
  • The object’s behavior can also be tested.
For example, for a Customer object, you can mock it by setting a name and age. You could set the age as 12 and then check that isAdult() returns false, since it should return true only for an age greater than 18. In this way, the behaviour of your mock Customer object is itself verified for the specified condition.
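A minimal sketch of the difference, assuming JUnit 5 and Mockito, with a hypothetical Customer class:

import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

import java.util.Stack;
import org.junit.jupiter.api.Test;

class StubVsMockTest {

    // Stub: a hand-written dummy with hard-coded behaviour; how it is used is never checked.
    static class EmptyStackStub extends Stack<String> {
        @Override
        public boolean empty() {
            return true; // fixed answer, regardless of actual contents
        }
    }

    @Test
    void stubReturnsCannedAnswer() {
        assertTrue(new EmptyStackStub().empty());
    }

    // Mock: behaviour is set per test, and the interactions with it can themselves be verified.
    @Test
    void mockBehaviourAndInteractionsAreTested() {
        Customer customer = mock(Customer.class);   // Customer is a hypothetical domain class
        when(customer.getAge()).thenReturn(12);
        when(customer.isAdult()).thenReturn(false); // consistent with an age of 12

        assertFalse(customer.isAdult());
        verify(customer).isAdult();                 // the mock's usage is asserted as well
    }
}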


What do you mean by Continuous Integration (CI)?
Continuous Integration (CI) is the process of automating the build and testing of code every time a team member commits changes to version control. This encourages developers to share code and unit tests by merging the changes into a shared version control repository after every small task completion.

What is Continuous Monitoring?
Continuous monitoring gets into the depth of monitoring coverage, from in-browser front-end performance metrics, through application performance, and down to host virtualized infrastructure metrics.

What is the role of an architect in Microservices architecture?
An architect in microservices architecture plays the following roles:
  • Decides broad strokes about the layout of the overall software system.
  • Helps in deciding the zoning of the components. So, they make sure components are mutually cohesive, but not tightly coupled.
  • Code with developers and learn the challenges faced in day-to-day life.
  • Make recommendations for certain tools and technologies to the team developing microservices.
  • Provide technical governance so that the teams follow the principles of microservices in their technical development.
Can we create State Machines out of Microservices?
Since each microservice owns its own database and is an independently deployable program unit, we can create a state machine out of it. So, we can specify different states and events for a particular microservice.
For Example, we can define an Order microservice. An Order can have different states. The transitions of Order states can be independent events in the Order microservice.
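A minimal sketch of such a state machine inside the Order microservice, using a hypothetical OrderState enum:

import java.util.EnumSet;
import java.util.Set;

// Hypothetical Order states and the transitions (events) allowed between them.
public enum OrderState {
    CREATED, PAID, SHIPPED, DELIVERED, CANCELLED;

    // States reachable from the current state
    public Set<OrderState> allowedTransitions() {
        switch (this) {
            case CREATED: return EnumSet.of(PAID, CANCELLED);
            case PAID:    return EnumSet.of(SHIPPED, CANCELLED);
            case SHIPPED: return EnumSet.of(DELIVERED);
            default:      return EnumSet.noneOf(OrderState.class); // DELIVERED and CANCELLED are terminal
        }
    }

    public boolean canTransitionTo(OrderState next) {
        return allowedTransitions().contains(next);
    }
}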

Q: How will you monitor multiple microservices for various indicators like health?
A:
 Spring Boot provides actuator endpoints to monitor the metrics of individual microservices. These endpoints are very helpful for getting information about applications, such as whether they are up and whether their components, like the database, are working well. But a major drawback of using actuator endpoints is that we have to hit the endpoints of each application individually to know its status or health. Imagine a system involving 50 applications: the admin would have to hit the actuator endpoints of all 50 applications. To help us deal with this situation, we can use the open-source project located at https://github.com/codecentric/spring-boot-admin.
Built on top of Spring Boot Actuator, it provides a web UI to enable us visualize the metrics of multiple applications.
Spring Boot Admin Example
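A minimal sketch of the admin server side, assuming the spring-boot-admin-starter-server dependency is on the classpath:

import de.codecentric.boot.admin.server.config.EnableAdminServer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Monitored applications register themselves (or are discovered via Eureka) and expose
// their actuator endpoints; this application aggregates them in one web UI.
@SpringBootApplication
@EnableAdminServer
public class AdminServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(AdminServerApplication.class, args);
    }
}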

Q: What does one mean by Service Registration and Discovery ? How is it implemented in Spring Cloud
A:
 When we start a project, we usually have all the configurations in the properties files. As more and more services are developed and deployed, adding and modifying these properties becomes more complex. Some services might go down, while the location of others might change. This manual changing of properties can create issues.
Eureka Service Registration and Discovery helps in such scenarios. Since all services are registered with the Eureka server and lookups are done by calling the Eureka server, any change in service locations does not need to be handled manually and is taken care of automatically.
Microservice Registration and Discovery with Spring cloud using Netflix Eureka.
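A minimal sketch of both sides, assuming the Spring Cloud Netflix Eureka starters are on the classpath (the two classes below belong to two separate applications):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Application 1 - the registry: services register here and look each other up here.
@SpringBootApplication
@EnableEurekaServer
class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

// Application 2 - a service that registers itself with Eureka
// (eureka.client.service-url.defaultZone points at the registry)
// and can resolve other services by their logical names.
@SpringBootApplication
@EnableDiscoveryClient
class EmployeeServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }
}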

Q: What are the different Microservices Design Patterns?
A:
 The different Microservices Design Patterns are -

  • Aggregator Microservice Design Pattern
  • API Gateway Design Pattern
  • Chain of Responsibility Design Pattern
  • Branch Microservice Design Pattern
  • Circuit Breaker Design Pattern
  • Asynchronous Messaging Design Pattern


Q: What does one mean by Load Balancing ? How is it implemented in Spring Cloud
A:
 In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
In Spring Cloud this can be implemented using Netflix Ribbon.
Spring Cloud- Netflix Eureka + Ribbon Simple Example
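A minimal sketch of client-side load balancing with Ribbon, assuming the target service is registered in Eureka under the hypothetical name employee-service:

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
class RestTemplateConfig {

    // @LoadBalanced lets the RestTemplate resolve the logical service name from the
    // registry and spread calls across all of its available instances.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Usage: the host part is the Eureka service id, not a fixed host and port.
// String result = restTemplate.getForObject("http://employee-service/employees/1", String.class);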

Q: How to achieve server side load balancing using Spring Cloud?
A:
 Server-side load balancing can be achieved using Netflix Zuul.
Zuul is a JVM based router and server side load balancer by Netflix.
It provides a single entry to our system, which allows a browser, mobile app, or other user interface to consume services from multiple hosts without managing cross-origin resource sharing (CORS) and authentication for each one. We can integrate Zuul with other Netflix projects like Hystrix for fault tolerance and Eureka for service discovery, or use it to manage routing rules, filters, and load balancing across your system.
Spring Cloud- Netflix Zuul Example
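A minimal sketch of a Zuul edge service, assuming spring-cloud-starter-netflix-zuul and Eureka are on the classpath (by default, routes such as /employee-service/** are created automatically for every registered service):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Single entry point: requests like /employee-service/** are forwarded to the
// employee-service instances found in Eureka, with Ribbon load balancing the calls.
@SpringBootApplication
@EnableZuulProxy
public class ZuulGatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZuulGatewayApplication.class, args);
    }
}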

Q: In which business scenario to use Netflix Hystrix ?
A:
 Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.
Usually for systems developed using Microservices architecture, there are many microservices involved. These microservices collaborate with each other.
Consider the following microservices-
Fig: A chain of collaborating microservices
Suppose microservice 9 in the above diagram fails. With the traditional approach, we would propagate an exception up the chain, but this would still bring the whole system down.
This problem becomes more complex as the number of microservices increases. The number of microservices can be as high as 1000. This is where Hystrix comes into the picture-
We will be using two features of Hystrix-
  • Fallback method
  • Circuit Breaker
Spring Cloud- Netflix Eureka + Ribbon + Hystrix Simple Example

Q: What is Spring Cloud Gateway? What are its advantages over Netflix Zuul?
A:
 Zuul is a blocking API. A blocking gateway API uses as many threads as there are incoming requests, so this approach is more resource intensive. If no thread is available to process an incoming request, the request has to wait in a queue.
Spring Cloud Gateway, in contrast, is a non-blocking API: a thread is always available to accept the incoming request, the requests are processed asynchronously in the background, and the response is returned once processing is complete. So no incoming request gets blocked when using Spring Cloud Gateway.
Spring Cloud Gateway Tutorial - Hello World Example
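A minimal sketch of a route definition with Spring Cloud Gateway, assuming a hypothetical service id employee-service registered with service discovery:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class GatewayRoutes {

    // Requests matching /employees/** are forwarded, non-blocking, to the
    // load-balanced employee-service instances.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("employee-route", r -> r.path("/employees/**")
                        .uri("lb://employee-service"))
                .build();
    }
}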

Q: What is Spring Cloud Bus? Need for it?
A:
 Consider the scenario where we have multiple applications reading properties using Spring Cloud Config, and Spring Cloud Config in turn reads these properties from Git.
Consider the example below, where multiple employee producer modules get the property for Eureka registration from the Employee Config Module.
What happens if the Eureka registration property in Git changes to point to another Eureka server? In such a scenario, we would have to restart the services to get the updated properties. Another way is to use the Actuator endpoint /refresh, but we would have to call this URL individually for each module. For example, if Employee Producer1 is deployed on port 8080, we call http://localhost:8080/refresh, similarly http://localhost:8081/refresh for Employee Producer2, and so on. This is again cumbersome. This is where Spring Cloud Bus comes into the picture.
The Spring Cloud Bus provides a feature to refresh configurations across multiple instances. So, in the above example, if we refresh Employee Producer1, it automatically refreshes all other required modules. This is particularly useful when we have multiple microservices up and running. It is achieved by connecting all the microservices to a single message broker: whenever an instance is refreshed, the event is propagated to all the microservices listening on the broker, and they get refreshed as well. A refresh of any single instance can be triggered by using the endpoint /bus/refresh.
Spring Cloud Tutorial - Publish Events Using Spring Cloud Bus
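A minimal sketch of a bean that picks up refreshed configuration without a restart, assuming Spring Cloud Config and a hypothetical property welcome.message:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Beans in @RefreshScope are rebuilt when a refresh event arrives
// (for example, broadcast to all instances by Spring Cloud Bus via /bus/refresh),
// so the new property value is served without restarting the service.
@RestController
@RefreshScope
class MessageController {

    @Value("${welcome.message}")
    private String welcomeMessage;

    @GetMapping("/message")
    public String message() {
        return welcomeMessage;
    }
}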

Q: What is Spring Cloud Data Flow? Need for it?
A:
 Spring Cloud Data Flow is a toolkit to build real-time data integration and data processing pipelines by establishing message flows between Spring Boot applications that could be deployed on top of different runtimes.
Long-lived applications require Stream applications, while short-lived applications require Task applications. Stream applications are built around the concepts of a Spring Cloud Stream Source and a Spring Cloud Stream Sink.
Pipelines consist of Spring Boot apps built using the Spring Cloud Stream or Spring Cloud Task microservice frameworks. SCDF can be accessed through the REST API it exposes or through its web UI console.
We can make use of metrics, health checks, and remote management of each microservice application. We can also scale stream and batch pipelines without interrupting data flows. With SCDF we build data pipelines for use cases like data ingestion, real-time analytics, and data import and export. SCDF itself is composed of several other Spring projects.

Spring Cloud Tutorial - Stream Processing Using Spring Cloud Data Flow

Q: What is Docker? How to deploy Spring Boot Microservices to Docker?
A:
 
Docker is a containerization platform that packages an application together with its dependencies into a container image, which then runs the same way in any environment. The following tutorials cover the details:
  • What is Docker
  • Deploying Spring Based WAR Application to Docker
  • Deploying Spring Based JAR Application to Docker
Q: How to deploy multiple microservices to Docker?
A:
 Deploying Multiple Spring Boot Microservices using Docker Networking

Q: What is Pivotal Cloud Foundry(PCF)?
A:
 Some time back, all IT infrastructure was on premises: there were in-house servers managed by IT personnel or a service provider.
Then, with the advent of cloud computing, these software and hardware services came to be delivered over the internet rather than being hosted on premises.
Evolution of IT Services
Cloud Foundry is an open source, multi-cloud application platform as a service governed by the Cloud Foundry Foundation. The software was originally developed by VMware and then transferred to Pivotal Software, a joint venture by EMC, VMware and General Electric.
It is a platform as a service (PaaS) on which developers can build, deploy, run, and scale applications.
Many Organizations provide the cloud foundry platform separately. For example following are some cloud foundry providers-
  • Pivotal Cloud Foundry
  • IBM Bluemix
  • HPE Helion Stackato 4.0
  • Atos Canopy
  • CenturyLink App Fog
  • Huawei FusionStage
  • SAP Cloud Platform
  • Swisscom Application Cloud

How do you scale and balance the load of microservices in a distributed system?

What is the API Composition Pattern?

You have applied the Microservices architecture pattern and the Database per service pattern. As a result, it is no longer straightforward to implement queries that join data from multiple services.

Problem

How to implement queries in a microservice architecture?

Solution

Implement a query by defining an API Composer, which invokes the services that own the data and performs an in-memory join of the results.
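A minimal sketch of an API composer, assuming hypothetical Order and Customer services reachable through a load-balanced RestTemplate, and hypothetical Order, Customer, and OrderDetails types:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Joins data owned by two services in memory, since a database join
// across service boundaries is not possible.
@RestController
class OrderDetailsComposer {

    private final RestTemplate restTemplate;

    OrderDetailsComposer(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/order-details/{orderId}")
    public OrderDetails getOrderDetails(@PathVariable String orderId) {
        // Query the service that owns the order data
        Order order = restTemplate.getForObject(
                "http://order-service/orders/" + orderId, Order.class);
        // Query the service that owns the customer data
        Customer customer = restTemplate.getForObject(
                "http://customer-service/customers/" + order.getCustomerId(), Customer.class);
        // In-memory join of the two results
        return new OrderDetails(order, customer);
    }
}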


References:
https://www.edureka.co/blog/interview-questions/microservices-interview-questions/