The Object Oriented Life

How can Agile help you in clearing the technical debt?


This post is a write-up of my talk titled "How can Agile help you in clearing the technical debt?", presented at the Scrum Bangalore Meetup 2013. It was a short, 30-minute talk, so I thought a write-up would help elaborate the ideas on the slides.





Defining Technical Debt

As per Wikipedia, technical debt is "a metaphor referring to the eventual consequences of poor or evolving software architecture and software development within a code base". Technical debt is best understood when it is compared with the cost of change of the software: more debt means a higher cost of introducing a change into your system.


The term technical debt was coined by Ward Cunningham in 1992. He used the term to explain to his manager a particular refactoring he wanted to do; since he was working in the financial industry, it was easy to communicate in terms of debt. Ward explains more about the metaphor here. The term is immensely popular for communicating the importance of code refactoring to the business and stakeholders. Like financial debt, technical debt is sometimes unavoidable, but we should take steps to minimize it.


When you have technical debt in your system, you can either pay the interest or pay back the principal. When we work on regular stories, we do a bit of "extra" work beyond what would ideally be needed, due to the technical debt; this is the interest payment. It is important that we also take some time and effort to pay back the principal by refactoring and cleaning up the code.


Technical Debt in Agile Context

The principle from the list of agile principles that best refers to clearing technical debt is: "Continuous attention to technical excellence and good design enhances agility". Agile projects place a lot of emphasis on working software, but this in no way implies that the intrinsic quality of the software can be compromised. As we push for more and more features, it is important that we take time to look back at the health of the application. So, make sure that your agile projects do not become fragile as they grow.


There are lots of ways to deal with technical debt. I want to talk about three simple steps that we have tried in our projects.


1. Tools - Your must-have defense

One of the major improvements brought by agile development practices is the widespread usage of continuous integration (CI). Your CI environment should be configured to measure the health of the application, including reports from static code analysis and test coverage tools. SonarQube is a good platform where you can create a dashboard and see the results from the tools mentioned above; it also has a plugin to view the technical debt. Apart from the tools, you should be following good programming practices and guidelines.


2. Negotiating with Product Owner

The second technique that we have tried to minimize technical debt is to negotiate with the product owner and take him on board with the technical improvements we want to make. Use the opportunity of change requests to clean up the functionality. We had a module in our application with lots of bugs, and a major change request was planned on that module. After much discussion we concluded that it would be easier to rebuild the module than to fix the issues. Once we rebuilt it, the change request was about 75% easier to implement compared to the old code.


This means that we have to evaluate the possibility of refactoring at every opportunity and go for it when the time is right. It is the professional responsibility of a developer to communicate the depth of technical debt in the application, and we should take all possible steps to minimize it.

We can also have technical stories in release planning. Contrary to a user story, a senior developer should describe the success criteria of such a story. This is another case where you need to get your product owner on board.


3. Trying out new Stuff

In our projects, most things can be done in multiple ways and we have to choose one of them. There are also situations where we are not sure about the complexity of the task and story point estimation is difficult. We can use a spike to deal with such situations. A spike is an experiment that allows developers to learn just enough about something unknown in a user story, e.g. a new technology, to be able to estimate that user story. A spike must be time-boxed; this defines the maximum time that will be spent learning and fixes the estimate for the spike.


In Java, a daemon thread is a low-priority background thread that does not prevent the JVM from exiting. We have used the same strategy for a major change in our application: upgrading our frameworks (Spring, Hibernate and Tapestry) to their latest versions. Since it is a time-consuming activity, we did not want to stop everything else and work on it, so we chose to keep it going as a low-priority task, and it took us 10 sprints to complete. Since the team continuously delivered features during this period, it was very easy to get buy-in from the product owner.
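For reference, here is a minimal Java sketch of the daemon-thread idea (the task body is a hypothetical placeholder):

public class BackgroundUpgradeWorker {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            while (true) {
                // hypothetical placeholder for low-priority background work,
                // e.g. chipping away at a framework-upgrade task list
            }
        };
        Thread worker = new Thread(task, "background-upgrade-worker");
        worker.setDaemon(true); // the JVM can exit even if this thread is still running
        worker.start();
        Thread.sleep(1000);     // when main ends, the daemon does not keep the JVM alive
    }
}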

Useful Links:-

Note:- There are many ways to define and minimize technical debt. I have discussed a few ways that we have successfully tried. You can go through the above links to find more information and choose the methods that suit you best.



A MindMap for Java Developer Interviews


Over the years I have been a panelist in many interviews for Java developers. I have previously written a post titled Top 7 tips for succeeding in a technical interview for software engineers, which covers a few general guidelines. In this post I will share a mind map containing the general topics covered in a Java developer interview. I prepared it as a reference for myself, to remember the pointers and to keep a common standard across multiple interviews.



XMind gives a nice listing of the map. You can find the map here. Here is an image which you can download and use.




Finally, here is an old-fashioned tabbed content list which is easier to copy and paste.

Java-Topics
OOPs
Encapsulation
Abstraction
Inheritance
Interface - Abstract Class
Casting
IS-A vs HAS-A Relationships
Aggregation vs Composition
Polymorphism
Method Overloading vs Method Overriding
Compile time vs Runtime
Threads
Creating threads
Multitasking
Synchronization
Thread Transitions
Marker Interface
Serialization
Cloneable
Shallow copy vs Deep Copy
Collections
Map, List and Set
Equals - Hashcode
Legacy - Synchronized Classes
JVM
Stack vs Heap Memory
Garbage Collection
JRE, JVM, JDK
Class loaders
Exception
Checked Vs Unchecked Exceptions
Exception handling best practices
try, catch, finally, throw, throws
APIs
Files
String - StringBuffer - StringBuilder
Java IO
XML
SAX Based & DOM Based
JAXB - Java Architecture for XML Binding
Access specifier 
Access modifier 
public
protected
default
private
final
static
synchronized
abstract
transient
volatile
Inner/Nested Classes
JavaEE Basics
Packaging the Applications
WAR
EAR
Basics
MVC
Servlets
Listeners
Lifecycle
JSPs
APIs
JPA
JAX-WS
SOAP, WSDL Webservices basics
Contract First vs Code First
JAX-RS
RESTful and its advantages

JSF


This is a work in progress and I hope to refine it further. Let me know if you have any comments.

Does the View in database reduce the query performance?


Not really!! Here is why...

A few days back, I was telling a friend about one of our newly created Filter APIs with server-side pagination. In the implementation, we query a view to get the data. Since the pagination is on the server side, the query returns only 10 records at a time. His question was: since views internally use tables for querying, will this end up being less performant? Will it create the result set of all the data and then pick the top 10 items?

My initial reaction was to agree with the question and confess we had an issue. But I was sure this could not be the case; if it were, we would have had severe issues already. I am not much of a DB guru myself, so I did some research with others to figure out the story behind the scenes. By the way, we use MS SQL Server 2008, and I am not sure whether this description applies to other databases.

The question to ask was: we know that views are used to simplify other queries or standardize access to data, but do we compromise on speed while doing so?

I got a related answer from SO. When an SQL statement references a nonindexed view, the parser and query optimizer analyze the source of both the SQL statement and the view and then resolve them into a single execution plan. There is not one plan for the SQL statement and a separate plan for the view. So it is fine to use views for the above use case.
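For illustration, here is a minimal JDBC sketch of such a paginated query against a view (the view name v_orders, its columns and the page size are hypothetical; ROW_NUMBER() is used because SQL Server 2008 has no OFFSET/FETCH):

import java.sql.*;

public class FilterPageQuery {

    // Fetches one page of rows from a view; v_orders and its columns are hypothetical.
    public static void fetchPage(Connection con, int page, int pageSize) throws SQLException {
        String sql =
            "SELECT id, customer, total FROM ("
          + " SELECT id, customer, total, ROW_NUMBER() OVER (ORDER BY id) AS rn"
          + " FROM v_orders" // the nonindexed view; the optimizer expands it into the plan
          + ") paged WHERE rn BETWEEN ? AND ?";
        int first = (page - 1) * pageSize + 1;
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, first);
            ps.setInt(2, first + pageSize - 1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("customer"));
                }
            }
        }
    }
}

Only the requested page is materialized, exactly as it would be if the query were written directly against the underlying tables.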

Further, I got to know about indexed views from these discussions, which can improve the performance even further. Microsoft has nice documentation on Improving Performance with SQL Server 2005 Indexed Views.


Why is Tomcat a web server and not an application server

Many application developers do not focus much on the infrastructure their code runs on. When it comes to web applications, there are common confusions, such as the difference between a web server and an application server, or when to go for an EAR vs a WAR deployment, etc.

There are many good answers that differentiate between web servers and application servers, like this one. Most of the time, the terms web server and application server are used interchangeably. This article explains the working of a typical web server. Typically, we get confused by the example of Tomcat (a web server) having the capability to run enterprise applications. So, is Tomcat a web server or an application server? Let me tell you how I convinced myself regarding this.

Some time back I was struck by the question What's the difference between JPA and Hibernate on Stack Overflow. I did answer it, but one of the comments led me to a more detailed understanding of the JavaEE spec and certified servers. If you understand this, then differentiating between a web server and an application server is easy. During my investigation I found this article, which discusses the advantages of both.

A more detailed look into the meaning of the JavaEE specification throws some light on our discussion. As we know, specifications are sets of rules; simply put, they contain the interfaces. Any JavaEE server which needs to comply with the spec has to provide implementations of these interfaces. You can find the list of certified JavaEE servers here. If you are deploying your enterprise application (meaning it uses JPA, EJB or some other technology that is part of Java EE) to a server which complies with JavaEE, then the lib need not contain the API implementation jars. But these are needed if you are using a web server like Tomcat for deployment.

For example, if you use JPA in your application and deploy it to JBoss AS 7, you do not need any additional jars in the lib. But if you want to deploy the same application to a Tomcat server, you need to add jars to the lib that implement the JPA spec, such as EclipseLink or Hibernate. This is what makes JBoss AS 7 an application server and Tomcat a web server. Another key difference is that we cannot deploy an EAR file to Tomcat; it can only handle WAR files.

4 simple steps to migrate legacy projects from Ant to Maven

For some time we had been thinking about migrating our build from Ant to Maven. It happened last month and was actually simpler than we had anticipated. From my experience, here is a brief account of the steps we followed. Our application is an enterprise web application built with multiple frameworks and technologies and deployed as a single WAR.

1. Create maven project directory structure.

As told in the Maven user guide, create the below directory structure. We did this under a new folder for the project.
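For a single-WAR application like ours, the standard layout is:

project-root/
    pom.xml            (the Maven build descriptor, created in step 3)
    src/main/java      (application sources)
    src/main/resources (configuration files packaged on the classpath)
    src/main/webapp    (JSPs, WEB-INF/web.xml and static content)
    src/test/java      (unit and integration test sources)
    src/test/resources (test configuration)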

2. Move the files/folders, keeping the SCM logs

Even though the folder structure is new, the source files will be the old ones! We want to keep the SCM logs while moving them to the new locations. Remember to commit the folders created in step 1 before you start moving your files. If you use SVN, see this user guide or this SO question on how to do it. Move the Java sources, unit/integration tests and configuration resources to the appropriate folders.

3. Create the POM and add dependencies

The most critical part of the migration is adding the dependencies to the POM. Start by adding the dependencies for the frameworks used in your application, and make sure you add the right versions of the jars. You can find the version of a jar by reading the MANIFEST.MF file inside its META-INF folder; this helps when versions are missing from the file names.

Any third-party jars can be added to the Maven repository as told here. If you are using very old versions of jar files, some of them may not be available in the Maven repository. Here you can either try upgrading to newer versions or prepare a local install as told before. Once you have added all the dependencies, try building the application and watch out for any major issues.

4. Make sure you haven't changed much in the WAR

Maven is a build tool, which means your WAR should not change. So, in the last step we compare both versions and make sure they are the same. Make sure you are on top of all the differences. Also, compare the jar files generated by Maven with your existing files, and sync them by:
     - Adding <exclusions> to remove the unwanted jars
     - Adding the dependencies for the missing jars
This can be a tiring task depending on the number of jars you have in your lib. But make sure that you have each one covered and know why it exists in your app.

Maybe this is a late post; most applications might have already been migrated by now. Anyway, better late than never! According to many experts, Gradle is also a good choice as a build tool for your new project.

TalkNotes - The story of SonarQube told to a DevOps Engineer

This week I spoke at the Bangalore DevOps meetup on the topic "The story of SonarQube told to a DevOps Engineer". I have started writing TalkNotes, inspired by Martin Fowler; unlike his detailed articles, my posts aim to help the audience better understand my slides. SonarQube is an open source code quality management platform. It was a 30-minute talk focused on the need, setup, CI infrastructure and administration of SonarQube for the DevOps community.


I started the talk with one of my favorite subjects, technical debt. We also looked at some of the parameters which determine quality: coding standard breaches, duplication, lack of unit tests, bad distribution of complexity, spaghetti design etc. I have spoken about this in more detail in a previous post. There are various existing tools that help reduce technical debt by improving code quality. What was missing was an easier way of tracking these code rule violations. For example, how much debt was introduced or cleaned up? As a developer, how do you quantify the improvement which a particular code refactoring has brought to the team?

This is where Sonar comes to your help. Sonar's rich feature set allows you to do all this and more. Currently it can run quality analysis on more than 20 languages, including Java, C#, C/C++, PL/SQL, JavaScript, PHP, Web and XML. It stores the analysis results, and the data is displayed through various dashboards. Further slides discuss the Sonar platform overview and installation.

The below diagram shows the CI environment including SonarQube. The Hudson plugin for SonarQube can be configured by following the wiki.


Image Idea from this blog.

The best part of Sonar is its documentation. It is the most comprehensive documentation I have read for any open source product; you just need their wiki to get 99% of the answers. Now that I have it configured, I hope to write more about it in the coming months.

Integration Testing for Spring Applications with JNDI Connection Pools

We all know we need to use connection pools wherever we connect to a database. All modern JDBC type 4 drivers support it. In this post we will look at an overview of connection pooling in Spring applications and how to deal with the same in non-JEE environments (like tests).

Most examples of connecting to a database in Spring use DriverManagerDataSource. If you don't read the documentation properly, you are going to miss a very important point:

NOTE: This class is not an actual connection pool; it does not actually pool Connections. It just serves as simple replacement for a full-blown connection pool, implementing the same standard interface, but creating new Connections on every call.
Useful for test or standalone environments outside of a J2EE container, either as a DataSource bean in a corresponding ApplicationContext or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls will simply close the Connection, so any DataSource-aware persistence code should work.

Yes, by default Spring applications do not use pooled connections. There are two ways to implement connection pooling, depending on who manages the pool. If you are running in a JEE environment, it is preferred to use the container for it. In a non-JEE setup, there are libraries which help the application manage the connection pools. Let's discuss them in a bit more detail below.

1. Server (Container) managed connection pool (Using JNDI)


When the application connects to the database server, establishing the actual physical connection takes much more time than the execution of the scripts. Connection pooling is a technique that was pioneered by database vendors to allow multiple clients to share a cached set of connection objects that provide access to a database resource. The JavaWorld article gives a good overview of this.



In a J2EE container, it is recommended to use a JNDI DataSource provided by the container. Such a DataSource can be exposed as a DataSource bean in a Spring ApplicationContext via JndiObjectFactoryBean, for seamless switching to and from a local DataSource bean like this class.



The below articles helped me in setting up the data source in JBoss AS.


The next step is to use these server-created connections from the application. As mentioned in the documentation, you can use the JndiObjectFactoryBean for this. It is as simple as below.
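The original example was an image; here is an equivalent sketch using Spring's Java configuration (the JNDI name java:/my-ds matches the one used later in this post):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jndi.JndiObjectFactoryBean;

@Configuration
public class DataSourceConfig {

    // Looks up the container-managed connection pool registered under a JNDI name.
    @Bean
    public JndiObjectFactoryBean dataSource() {
        JndiObjectFactoryBean jndi = new JndiObjectFactoryBean();
        jndi.setJndiName("java:/my-ds"); // the name configured in the JBoss datasource
        jndi.setProxyInterface(DataSource.class);
        return jndi;
    }
}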



If you want to write any tests using Spring's SpringJUnit4ClassRunner, it can't load the context because the JNDI resource will not be available.

For tests, you can then either set up a mock JNDI environment through Spring's SimpleNamingContextBuilder, or switch the bean definition to a local DataSource (which is simpler and thus recommended). 

As I was looking for a good solution to this problem (I did not want a separate context for tests), this SO answer helped me. It sort of uses the various tips given in the Javadoc to good effect. The issue with that solution is the repetition of code to create the JNDI connections. I solved it using a customized runner, SpringWithJNDIRunner. This class adds JNDI capabilities to the SpringJUnit4ClassRunner: it reads the data source from the "test-datasource.xml" file on the class path and binds it to the JNDI resource with the name "java:/my-ds". After the execution of this code, the JNDI resource is available for the Spring container to consume.
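The runner itself was shown as an image in the original post; a sketch of how such a runner can be written (using the SimpleNamingContextBuilder mentioned above) looks like this:

import javax.sql.DataSource;
import org.junit.runners.model.InitializationError;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.mock.jndi.SimpleNamingContextBuilder;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

public class SpringWithJNDIRunner extends SpringJUnit4ClassRunner {

    private static boolean jndiBound; // bind only once per test cycle

    public SpringWithJNDIRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
        if (!jndiBound) {
            try {
                // Read the test data source definition from the class path ...
                ClassPathXmlApplicationContext ctx =
                        new ClassPathXmlApplicationContext("test-datasource.xml");
                // ... and expose it under the JNDI name the application expects.
                SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
                builder.bind("java:/my-ds", ctx.getBean(DataSource.class));
                builder.activate();
                jndiBound = true;
            } catch (Exception e) {
                throw new InitializationError(e);
            }
        }
    }
}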



To use this runner you just need to add the annotation @RunWith(SpringWithJNDIRunner.class) to your test. This class extends SpringJUnit4ClassRunner because there can only be one runner class in the @RunWith annotation. The JNDI resource is created only once per test cycle. This class provides a clean solution to the problem.

2. Application managed connection pool

If you need a "real" connection pool outside of a J2EE container, consider Apache's Jakarta Commons DBCP or C3P0. Commons DBCP's BasicDataSource and C3P0's ComboPooledDataSource are full connection pool beans, supporting the same basic properties as this class plus specific settings (such as minimal/maximal pool size etc).
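As an illustration, here is a minimal sketch with hypothetical connection details, using Commons DBCP 1.x property names (DBCP2 renames maxActive to maxTotal):

import org.apache.commons.dbcp.BasicDataSource;

public class PooledDataSourceFactory {

    // Builds an application-managed pool; the URL and credentials are placeholders.
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        ds.setUrl("jdbc:sqlserver://localhost:1433;databaseName=mydb");
        ds.setUsername("app_user");
        ds.setPassword("secret");
        ds.setInitialSize(5);  // connections created at startup
        ds.setMaxActive(20);   // maximum connections handed out at once
        ds.setMaxIdle(10);     // idle connections kept in the pool
        return ds;
    }
}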

Below user guides can help you configure this.


The below articles speak about the general guidelines and best practices in configuring connection pools.



Note:- All the text in italics is copied from the Spring documentation of the DriverManagerDataSource.

EXIN Cloud Computing Foundation Exam Review

Recently I attended a workshop on the EXIN Cloud Computing Foundation course and cleared the certification exam. This post is a bit about the topics covered in the exam and my experience learning them.








The principles of Cloud Computing. This chapter deals with definitions, types of clouds (public, private and hybrid) and cloud services (IaaS, PaaS, SaaS).
Most of the contents in this section are from The NIST Definition of Cloud Computing paper. Other topics include the evolution toward cloud computing, cloud computing architectures, and the benefits and limitations of cloud computing. The part about virtualization and its role in the rise of cloud computing was quite interesting for me.

Using the Cloud. This part is about accessing the cloud and mobility in the cloud.
This module covers the topics Overview of Accessing the Cloud, How Cloud Computing Can Support Business Processes and Service Providers Using the Cloud.


Security and Compliance. This is about the risks of cloud computing and the measures you can take.
This module covers the paper Top Threats to Cloud Computing, prepared by the Cloud Security Alliance, under the Security Risks and Mitigating Measures title. The Managing Identity and Privacy section deals with Triple-A authentication and various aspects of identity management.


Implementing and managing Cloud Computing. You learn about local cloud networks and how to support the use of cloud computing.
This module includes the topics Building a Local Cloud Environment and Managing Cloud Services. There is a lot of focus on managing cloud services and related governance frameworks.

Evaluation of Cloud Computing. Examples of the subjects here are cost aspects, (dis)advantages and SLAs.
This module speaks about the business case for cloud computing: for example, the cost implications for an organization evaluating cloud services in terms of capex and opex, and forming the service level requirements and agreements.


The text in italics is taken from the official exam page.
Written with StackEdit.

Identifying the skills gap for a Software Developer

This April I had to create an Individual Development Plan (IDP) for myself as part of the regular official procedures. One of the steps was to identify the gaps between where you are and the ideal position you want to reach. Thinking more along these lines, I created the below table, which contains ways to identify specific areas of development for a developer.

Guide to reading the table:-

Ask yourself the questions in column (D). If your answer is "Yes" to any of them, then you need to consider the action plans listed in column (E).

Each row below lists: (A) Sl No, (B) Section, (C) what it measures and its weight, (D) these things happen with you, (E) your action plans.

1. Understanding what to do - What to do? (40%)
(D) These things happen with you:
1. You have missed some of the requirements.
2. You hear others say "This feature was not supposed to work like this".
3. Your completed work gets re-opened during QA or user testing.
(E) Your action plans:
- Improve your domain knowledge.
- Ask more questions to your PO so that you can improve your understanding of the requirements.
- Push for improved requirements documentation.
- Spend more time testing your features.
- Listen to sprint demos to get an overview of all the new features added.

2. Knowledge of frameworks, design patterns, practices and principles - How to do it: your skills to do it (20-30%)
(D) These things happen with you:
1. You don't know where to start when you have to implement a new feature.
2. You don't know whether a similar functionality already exists in the application.
3. You don't completely understand the frameworks in the application and how they are used.
(E) Your action plans:
- Pair program with an experienced developer to learn how they approach a problem.
- Learn more about the frameworks used in your app.
- Try creating sample applications using them.
- Identify the patterns and principles used in your app and try to use them.

3. Problem solving, analytical and debugging skills - How to do it: your ability to do it (10-20%)
(D) These things happen with you:
1. You face difficulties when it comes to writing algorithms.
2. You are weak in debugging and finding issues in the code.
(E) Your action plans:
- See if you can apply some known patterns to solve the problem.

4. Communicating with your code - How well you did it, and how easily somebody can understand it (15%)
(D) These things happen with you:
1. Your code is not up to the standards or frequently ignores code quality.
2. You don't have enough code coverage.
3. You can't write quality documentation.
(E) Your action plans:
- Use tools like Sonar to assess the quality of your code.
- Spend more time refactoring and improving the code quality.

5. Communicating about your work - How well you can communicate about your work (5%)
(D) These things happen with you:
1. You don't follow the process in the team.
2. Your check-in comments are not useful.
3. Your team doesn't know what you are working on.
(E) Your action plans:
- Understand and adhere to the team policies. If you feel that something is wrong, communicate and get it clarified.


This is the first draft of the table. Try to apply it to yourself or your team and let me know your feedback. I hope to expand each area by writing more in the future.


Application Security for Java Developers

Security is a top-priority item on everyone's checklist nowadays. In this post, I will introduce you to useful reference material that can help you get started with securing applications. I want to focus on web applications built with Java-related technologies.

1. Authentication and Authorization

When it comes to security, the most fundamental concepts are authentication and authorization. Unless you have a strong reason not to, you should follow a widely accepted framework for this purpose. We have Java EE Authentication and Spring Security to help us out in this context. I have worked with Spring Security in the past, and it can be customized to suit your specific needs.

2. Security in the Web Layer

In our application stack, the web layer is the most vulnerable to attacks. We have many established standard practices and detection mechanisms to minimize these risks. The OWASP Top 10 list is a must-have checkpoint for security checks. The Open Web Application Security Project (OWASP) mission is to make software security visible, so that individuals and organizations worldwide can make informed decisions about true software security risks.

3. API Security

With the rise of mobile applications and stronger browsers, exposing functionality through APIs becomes more popular by the day. We need to follow the same security practices as for the web layer. All API requests should be authenticated, and we should use the principle of least privilege. I found the presentation from Greg Patton at AppSec EU15 titled The API Assessment Primer a great start for API security validations. The two major points of his talk are:
Do not expose any operations that are not needed
Do not expose any data that is not required

These are in line with the basic security principle of granting least privilege by default.

To authenticate the services, we can create a simple token-based API authentication mechanism based on OAuth2 standards. If the services expose any sensitive data, it is better to use HTTPS so that man-in-the-middle attacks can be avoided.
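As an illustration, here is a minimal servlet-filter sketch (not a full OAuth2 implementation; the token check is a hypothetical placeholder):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects API requests that do not carry a valid bearer token.
public class ApiTokenFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String header = request.getHeader("Authorization");
        if (header == null || !header.startsWith("Bearer ")) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Missing token");
            return;
        }
        if (!isValid(header.substring("Bearer ".length()))) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Invalid token");
            return;
        }
        chain.doFilter(req, res);
    }

    private boolean isValid(String token) {
        return false; // placeholder: validate against your OAuth2 provider or token store
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}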

4. Validating the User Input

Be aware that any JavaScript input validation performed on the client can be bypassed by an attacker who disables JavaScript or uses a web proxy. Ensure that any input validation performed on the client is also performed on the server. Go through the OWASP and WASC checklists to identify the potential validations you need in your application.
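A minimal sketch of repeating a client-side check on the server (the field name and pattern are hypothetical):

import java.util.regex.Pattern;

public class InputValidator {

    // Whitelist pattern: letters and digits only, with a bounded length.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9]{3,20}$");

    // Validate on the server even if the same check already ran in the browser.
    public static String requireValidUsername(String input) {
        if (input == null || !USERNAME.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid username");
        }
        return input;
    }
}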

Other Useful Reference Materials

Do you need microservices architecture?



Last week I spoke at the Bangalore Software Architects Meetup on the topic "Do you need microservices architecture?". Here is the presentation and a bit more info about it.

Over the last few years there has been a lot of attention on microservices. After the initial "hype", we saw what problems it solves and what it cannot. I have tried to cover what microservices are, where they can be useful and where they are not. I want to share the guidelines which can be used to choose between a monolith and microservices.

I feel that one must answer the below questions before choosing a microservices architecture, and it will be beneficial to you if the answer to each of them is "yes".

1. Do your services represent different business cases/domains?
2. Do the services need to be deployed and managed independently?
3. Do different parts of the application have different scaling/technology needs?

A modular monolith can be transformed into a set of microservices if the need arises. So we should start with a monolith when we are not sure about the future.


An approach to help developers write meaningful tests

Over the last few years we have been adding unit tests to our existing product to improve its internal quality. During this period we always had the challenge of choosing between unit and integration tests. I would like to mention some of the approaches we have applied to improve the quality of the existing system.

At its core, unit testing is about testing a single component at a time by isolating its dependencies. Classical unit tests have these properties: fast, independent, repeatable, self-validating, timely. Typically, in Java, a method is considered a unit, so the traditional (and most common) approach is to test a single method of a class separated from all its dependencies.

Interestingly, there is no hard definition of what makes a unit. Many times a combination of methods spread across multiple classes forms a single behavior, and in this context the behavior should be considered the unit. I have seen people breaking these units and writing multiple tests for the sake of testing a single method; if the intermediate results are not significant, this only increases the complexity of the system. The best way to test a system is to test it with its dependencies wherever we can accommodate them. Hence we should try to use the actual implementations and not mocks. Uncle Bob puts this point very well in his article: "Mock across architecturally significant boundaries, but not within those boundaries."

If the software is built with a TDD approach, it might not be a challenge to isolate dependencies or add a test for your next feature. But not all software is built like this. Unfortunately, we have systems with only a few tests, or none at all. When working with these systems we can make use of the above principle and use tests at different levels. Terry Yin provides an excellent graphic (shown below) in his presentation titled Misconceptions of unit testing. It shows how different tests add value and what their drawbacks are.



Many of our projects use Java and the Spring framework. We have used Spring's @RunWith and SpringJUnit4ClassRunner to create app-level tests, which give you the objects with all their dependencies initialized. You can selectively mock certain dependencies if you would like to isolate them. This sets a nice platform for writing tests with multiple collaborating objects; we call them app-level tests, a different term chosen to differentiate them from classical unit tests. These are still fast-running tests with no external dependencies. We also have integration tests which connect to external systems. The overall picture of developer tests can be summarized as below:



Unit Test - naming convention: ends with Test; runs at: every build; when to use: rule-based implementations where the logic can be tested in isolation; execution time: a few milliseconds.
App Level Tests - naming convention: ends with TestApp; runs at: every build / nightly builds (team's choice); when to use: tests the service layers in connection with others, frees you from creating mock objects, the application context is loaded in the tests; execution time: a few seconds.
Integration Test - naming convention: ends with TestIntg; runs at: on demand, when a special profile is used in the build; when to use: all the above, plus when you need to connect to external points like the DB, web services etc.; execution time: depends on the integration points.
Manually Running Tests - naming convention: ends with TestIntgManual; runs at: manual runs only; when to use: debugging a specific problem locally, all the above but can't be automated; execution time: depends on the integration points.
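Here is a minimal sketch of an app-level test (the service, context file and assertions are hypothetical):

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// App-level test: the service is wired with its real collaborators
// from the Spring context instead of hand-built mocks.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
public class OrderServiceTestApp {

    @Autowired
    private OrderService orderService; // hypothetical service under test

    @Test
    public void serviceIsWiredWithItsRealCollaborators() {
        assertNotNull(orderService);
        // exercise behavior that spans several collaborating objects here
    }
}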


This approach lets developers choose the right level of abstraction to test and helps them optimize their time. Nowadays my default choice is app-level tests, and I go for unit tests when I have complicated logic to implement.

Further reading:


Practical communication strategies for software architects


Here is a video recording of my session titled Practical communication strategies for software architects at the Bangalore Software Architects Meetup.


The session covers communication ideas for various stages and different stakeholders in a project scenario.

 
Practical communication strategies for software architects from Manu Pk


Have a look at the video recording of the session

When to stay with modular monoliths over microservices

We have seen the developments in microservices architecture maturing, whereby more and more people are trying to evaluate the benefits before jumping onto an unknown trajectory.

In the talk titled When to stay with modular monoliths over microservices at Oracle Code, Bangalore, I tried to discuss these points. You can view the slides below.



According to me, an oversimplified version of the decision tree comes down to two criteria: business context and relative scaling. I tried to explore the same in my presentation. As Martin Fowler puts it, you shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile.




Here is a link to the YouTube recording of the session. Let me know what you think about these topics.


4 ways to contribute to the community for a software developer

If you are a software professional looking for something new to start, here are 4 things to try!

1. Attend a community event, user group get-together or local meetup



2. Answer questions on Stack Overflow or contribute to support forums


3. Share your experience with the community via a blog, Twitter or other forums


4. Contribute to open source



Building Evolutionary Architectures - Book review







In this blog post, I want to talk about the book Building Evolutionary Architectures by Neal Ford, Rebecca Parsons, and Patrick Kua. I attended Neal's conference talk on this topic and heard from many other speakers about fitness functions. That's the reason I wanted to read and understand the concepts mentioned in the book.


As the title implies, the book talks about building evolutionary architectures. The question the book tries to answer is: how do we make sure our software architecture stays intact with changing requirements? How do we build a system which can adapt to future needs, and how do we know that a decision we are taking is not hurting the architecture of the system?

The book proposes fitness functions to address this concern. An architectural fitness function provides an objective integrity assessment of some architectural characteristic. In a system there may be many characteristics that we want to measure, so you would write a separate fitness function for each of them. In the book, a fitness function is not defined in a concrete way but rather in the abstract form of graphs, tests or any other method by which we know that our system is coping well with change. This means you still need to use your intellect, not only to write the fitness functions but also to make sense of them.
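For example, a fitness function guarding layering can be written as a plain unit test. Here is a sketch using the ArchUnit library, which is one way to express such a check (the package names are hypothetical; the book itself does not prescribe a specific tool):

import org.junit.Test;
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class LayeringFitnessFunctionTest {

    @Test
    public void domainLayerDoesNotDependOnWebLayer() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");
        ArchRule rule = noClasses().that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..");
        rule.check(classes); // fails the build when this characteristic degrades
    }
}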

For me, the best thing about the book is that it provides software architects with a rich vocabulary to communicate the intentions behind their choices to a variety of audiences, including business stakeholders, teams, and engineering leadership. The book also gives you a survey of various architectural approaches, and it offers some practical ideas on how to implement evolutionary architectures. I particularly liked the focus on organizational factors and how they apply to software architectures.

In conclusion, I would recommend this book to any software architect. Use it as your communication guide, use it to improve your vocabulary, and use it to get a sense of what is happening across the industry, so that you can choose what is best for your situation.



