This page lists the various terms that relate to coding principles and the corresponding architecture those principles promote.
Architecture-related terms and the corresponding discussions are deliberately mentioned last, to explicitly highlight where architecture should sit in the hierarchy of product development. These are important terms, no doubt; indeed, the entire book revolves around architecture discussions! However, realize that these discussions must not take precedence over the business itself. If choosing between shipping a feature fast that is secure and works as intended but does not comply with architecture patterns, versus putting in extra time to have a properly designed feature, one should choose the former option, incur the technical debt, and then try to fix that debt subsequently. This way, the product will already have been out for some time. However, if the counter-concern is that product managers will not subsequently allow time for the technical debt to be addressed and will instead bring in a new product feature to be built, then that's a different discussion altogether.
Reference: Wikipedia, article at lmu.edu. A programming paradigm is a style or a way of programming. Some programming languages make it easy to write code with certain paradigms but not others.
The two major paradigms are the imperative and declarative styles of coding. In the imperative style, the programmer explicitly instructs the machine how to change its state. It has two main features: explicitly stating the order in which operations occur, in addition to having constructs that control that order; and allowing side effects, such that state can be modified at one point in time, within one unit of code, and then read later at a different point in time inside a different unit of code. An imperative style can be thought of as implementing some algorithm in explicit steps. In the declarative style, the program describes what must be accomplished in terms of the problem domain, rather than describing how to accomplish it as a sequence of programming-language primitives. For more details, see this question on StackOverflow. Taking an important line from one of the answers: "In any program, you will always have both imperative and declarative codes, what you should aim for is to hide all imperative codes behind the abstractions, so that other parts of the program can use them declaratively."
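As a minimal Python sketch of the contrast (the task of summing the squares of even numbers is an assumption made purely for illustration):

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative: explicitly instruct the machine how to build the result,
# step by step, mutating state (total) along the way.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n * n

# Declarative: describe *what* is wanted (the sum of the squares of the
# even numbers) and leave the iteration mechanics to the language.
total_declarative = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_declarative == 56
```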
See the article on OOP at Wikipedia. Business software can be broadly categorized as having two components: data, and methods that modify the data. The data can come from different sources, like inherent business data, user data, data from third parties, etc. The core of OOP revolves around the concept of an object, which is a combination of data fields that are highly related to each other, and methods that transform only the data fields of that object, optionally using some external input. Combining the two (i.e., the data fields and the methods) creates a context that models a real-world entity with which the software interacts. For example, a combination of account holder name and account balance, along with methods to deposit or withdraw, can form the object that models a "bank account". Banking services get built on top of this "bank account object", like opening an account for someone, making a deposit to the account, withdrawing from the account, transferring from one account to another, closing the account, etc. A "class" in OOP is like a blueprint for the "object". So, it is a "bank account class" which defines the behavior of the "bank account objects" of different users, such that each object contains the required raw data fields and methods.
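As a minimal Python sketch of such an object (the field and method names are illustrative assumptions):

```python
class BankAccount:
    """Blueprint for "bank account" objects: closely related data fields
    together with the methods allowed to transform them."""

    def __init__(self, holder_name, opening_balance=0):
        self.holder_name = holder_name          # data field
        self.balance_amount = opening_balance   # data field

    def deposit(self, amount):
        # Method transforming the object's own data, given external input.
        self.balance_amount += amount

    def withdraw(self, amount):
        if amount > self.balance_amount:
            raise ValueError("insufficient balance")
        self.balance_amount -= amount

# Banking services get built on top of such objects:
account = BankAccount("Alice", opening_balance=100)
account.deposit(50)
account.withdraw(30)
print(account.balance_amount)  # 120
```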
This should be contrasted with object-based programming (not object "oriented" programming), which allows collecting related data fields into a group but without restricting the types of operations that can be carried out on them. In OOP, an object should also exhibit other behaviors like encapsulation, inheritance, composition and polymorphism (going individually into these topics would become a big digression and so is avoided here; however, readers are strongly encouraged to understand the meaning of each of these terms, why each is necessary in OOP, and what would go wrong if the feature were absent).
OOP suggests exposing only the methods of a class that map to a certain behavior expected of that class. However, sometimes a method is complex in its implementation and is best implemented by breaking it down further. In the above example, the withdraw step can be broken down into first checking whether the account balance is greater than the requested withdrawal, after which the amount is withdrawn. Despite the existence of these sub-steps, maybe we don't want any other object to be able to invoke them directly. This is a valid requirement and is achieved by modifying the "accessibility" of members of the class. One related detail to keep in mind is that accessibility should be defined based on the behavior being modeled by the class; a member shouldn't be made publicly accessible just to ease unit testing of the method, as discussed here. On the topic of designing methods, note the command–query separation, or CQS, principle, which states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both.
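Continuing the bank-account sketch, the following shows the sub-step kept non-public and the command/query split. Note that Python marks non-public members only by the leading-underscore convention, whereas languages like Java enforce accessibility with keywords such as private:

```python
class BankAccount:
    def __init__(self, opening_balance=0):
        # Leading underscore: treated as non-public by convention.
        self._balance_amount = opening_balance

    # Query (CQS): returns data to the caller, changes nothing.
    def get_balance_amount(self):
        return self._balance_amount

    # Command (CQS): performs an action, returns nothing.
    def withdraw(self, amount):
        if not self._has_sufficient_balance(amount):
            raise ValueError("insufficient balance")
        self._balance_amount -= amount

    # Internal sub-step of withdraw: other objects should not invoke it
    # directly, so it stays non-public rather than being exposed merely
    # to ease unit testing.
    def _has_sufficient_balance(self, amount):
        return amount <= self._balance_amount
```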
A quick final note: if using OOP, it is strongly suggested to name fields as nouns, either in singular form (like account_number, balance_amount, etc.) or in plural form (like account_numbers or account_number_bag), because a field represents some measurement or property of a real-world entity. Similarly, methods should have names starting with a verb (like get_account_number, set_balance_amount, delete_account, transfer_amount, etc.) because a method represents an action performed on one or more fields. For the class, a singular proper noun should be used because the class represents a single real-world entity, regardless of whether it is a singular type (like Object, Date, etc.) or a collection type (like Array, not Arrays; and List, not Lists). Let's say you want to name the field that holds the account number. Using "accountnumber" as the name isn't recommended because it hurts readability, especially for someone new to the code: unless someone is familiar with the product, they won't know that the field name should be read as "account-number". There are two primary conventions in use. Java and JavaScript use camel case, by which the field name is written as "accountNumber" (note the capital 'N' indicating the start of a new word). C and Python use snake case, by which the field name is written as "account_number" (replacing the space with an underscore). Also, it is highly suggested to use properly named fields and methods rather than shortening names to save lines, because saving a few lines of code doesn't help with any performance metric, but it makes reading and understanding the code hell for everyone and ends up hurting application development.
See details about functional programming at Wikipedia. Functional programming is a declarative programming paradigm where programs are constructed by applying and composing functions, i.e., the desired method is a function which is itself formed of different functions combined together in some manner, and so forth. There is no concept of variables storing the value of a partial computation. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to variables, passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with methods written in imperative programming, which can have side effects such as modifying the program's state. It is claimed that by restricting side effects with pure functional programming, programs have fewer bugs, are easier to debug and test, and are more amenable to formal verification.
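Python is not a purely functional language, but its first-class functions allow a small sketch of the style, in which the result is built purely by applying and composing functions:

```python
from functools import reduce

# Pure functions: output depends only on the arguments; no state is mutated.
def square(n):
    return n * n

def is_even(n):
    return n % 2 == 0

# The program is a composition of functions; there is no intermediate
# variable being updated step by step.
def sum_of_even_squares(numbers):
    return reduce(lambda acc, n: acc + n, map(square, filter(is_even, numbers)), 0)

assert sum_of_even_squares([1, 2, 3, 4]) == 20  # 4 + 16
```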
See the article on AOP at Wikipedia. Aspect-oriented programming, or AOP, is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding behaviors that are not central to the business logic without modifying the core code itself. For example, say the primary business logic of some service is to allow the user to retrieve a file. A side functionality could be to log every service call, including whether it succeeded or failed. This new behavior (i.e., logging) does not contribute in any way to file retrieval, which is the primary function of the service; however, it is something expected of the service. Code written in the aspect-oriented programming paradigm allows easily adding/removing such extra requirements, or "concerns", without tightly coupling them to the core service logic.
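Python has no full aspect-oriented tooling built in (AspectJ in the Java world is the classic example), but a decorator can sketch the idea: below, the logging concern is attached to the business logic without editing its body (retrieve_file and its behavior are assumptions made for illustration):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Cross-cutting "logging" concern, kept outside the business logic."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
            logging.info("%s succeeded", func.__name__)
            return result
        except Exception:
            logging.exception("%s failed", func.__name__)
            raise
    return wrapper

@logged  # the concern is added (or removed) without touching the body below
def retrieve_file(path):
    with open(path) as f:
        return f.read()
```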
Coding principles are guidelines that an industry, organization, team or individual adopts to improve software designs and code implementations (reference: Simplicable, Wikipedia). However, for the purpose of this e-book, only the coding principles that affect backend development will be covered; the ones covering front-end or infrastructure will not. The coding principles adopted by a team themselves derive from the business priorities identified by the team. For example, consider the principle of "defensive programming", which suggests handling every scenario that can happen. Contrast it with a software product in its initial stages: not every feature is hashed out, because it is not yet known how users will interact with it. Optimizing the product for every scenario may take precious time away from more in-demand features and may factor towards the failure of the product. Listed below are some of the coding principles that I believe are important.
Reference: Wikipedia. "Black box" is an important principle which suggests that a new functionality in a software should behave as per the business requirements, without the caller needing to know how the actual coding is done; also, it should be able to do so for different inputs. Just like a black box, a software functionality should take a particular set of inputs every time and return an output. Consider applying this principle at the basic data level, wherein a combination of raw data along with the allowed operations on it forms a black box. Any code that interacts with this combination does so via the allowed operations and never directly with the data; this is the principle of "encapsulation" in object-oriented programming. Going one level up, consider applying this principle to a collection of objects and methods forming a "module" (in Python or JavaScript) or a "package" (in Java). Doing so suggests that the objects/methods should have closely related functionality that is separate from other modules or packages; this is also called code cohesion. Treating software functionality as a black box also helps in easily testing it.
Reference: Wikipedia. When adding a new functionality to a software, as much existing code as possible should be reused. The practice of reusing code is motivated by additional reasons: it enables faster development by saving the time needed to rewrite the same code and the corresponding tests. It also confirms that other portions of the software have been properly defined (in terms of what they do) and have been properly "black boxed", because had this not been done, it wouldn't be possible to reuse the existing code. Additionally, if code is not being reused and is instead getting duplicated, it becomes confusing to know whether the two duplicated code portions should change hand-in-hand or whether they represent different features. This principle also relates to the "Don't Repeat Yourself", or DRY, principle, which states that "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."
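A small sketch of DRY in Python (the withdrawal-limit scenario and the 10,000 limit are assumptions made for illustration):

```python
# Duplicated knowledge: the withdrawal limit is encoded in two places,
# and the two copies can silently drift apart.
def validate_atm_withdrawal(amount):
    return 0 < amount <= 10_000

def validate_branch_withdrawal(amount):
    return 0 < amount <= 10_000

# DRY: one authoritative representation of the limit, reused everywhere.
MAX_WITHDRAWAL_AMOUNT = 10_000

def validate_withdrawal(amount):
    return 0 < amount <= MAX_WITHDRAWAL_AMOUNT
```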
Reference: Wikipedia. Also see Offensive programming. The "Fail fast" principle suggests that if the inputs given to a command should result in a failure, then the failure should be identified as quickly as possible. The "quick" is not there to score performance points; it is done with the intention of preventing any unintended side effects from happening before the code fails. In contrast to defensive programming, fail-fast behavior sets explicit bounds on the limits of execution of the software; and if a customer encounters any such limit, it will be immediately visible and can be reported to product managers for remediation. This further ties in with the "Worse is better" principle.
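A minimal fail-fast sketch in Python, continuing the bank-account theme (the function and message wording are assumptions):

```python
def withdraw(balance_amount, requested_amount):
    # Fail fast: validate inputs up front, before any state is touched,
    # so the failure surfaces immediately and leaves no side effects.
    if requested_amount <= 0:
        raise ValueError("withdrawal amount must be positive")
    if requested_amount > balance_amount:
        raise ValueError("insufficient balance")
    return balance_amount - requested_amount
```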
This line is used in different articles to support the idea that development time should not be wasted on making nitpick improvements. It also finds support from the business viewpoint that more priority and focus should go towards features and developments that bring in revenue or clear blocks that may check future growth. Similar aphorisms, like "If it ain't broke, don't fix it", are used to further drive home the message not to go optimizing code. The irony, however, is that this is not the intent of the quote. Sourced from here, adding a few lines before and after the quote changes it to: "The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a oneshot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies. There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail."
From the above, it can be concluded that if there is a particular piece of code that will get called multiple times (as identified from analysis and not by guesswork), then working to optimize it is not an improper use of time. Also see this related discussion. Note that this discussion also lends itself to the "You Aren't Gonna Need It", or YAGNI, principle, which states that a programmer should not add functionality just because they foresee it being needed; instead, it should be added only once it is deemed necessary.
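To make "measure, then optimize" concrete, here is a minimal sketch using Python's built-in cProfile module (busy_function is a stand-in for whatever code is suspected of being hot):

```python
import cProfile
import pstats

def busy_function():
    return sum(i * i for i in range(1_000_000))

# Measure first, optimize later: let the profiler data, not intuition,
# identify the critical 3% of the program.
profiler = cProfile.Profile()
profiler.enable()
busy_function()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```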
Reference: Wikipedia. In traditional programming, it is the custom code that expresses the purpose of the program, and to do so, it calls reusable libraries to take care of generic tasks. With IoC, however, it is a generic framework that calls into the custom, or task-specific, code: custom-written portions of the program receive the flow of control from the generic framework. IoC is used to increase the modularity of the program and make it extensible. From a business viewpoint, this applies because what comes first is just some basic version of the product, to which future enhancements are added as sub-features. An example is defining a default browser for your computer/phone. With IoC, you instruct your phone (i.e., the generic framework in this example) to open a web page, and it does so by passing the request to the default browser (i.e., the custom code in this example). In the future, you can add or remove browsers and change defaults, but doing so still does not break the phone's ability to open a web page. The ability to install multiple browsers is the "extensibility" that your phone (i.e., the generic framework in this example) gets with IoC.
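A minimal IoC sketch in Python mirroring the browser example (the UrlOpener framework and handler names are assumptions made for illustration):

```python
# Generic framework: owns the flow of control and calls into whatever
# custom handlers have been registered with it.
class UrlOpener:
    def __init__(self):
        self._handlers = {}

    def register(self, scheme, handler):
        self._handlers[scheme] = handler

    def open(self, url):
        scheme = url.split("://", 1)[0]
        handler = self._handlers.get(scheme)
        if handler is None:
            raise ValueError(f"no handler registered for scheme '{scheme}'")
        handler(url)  # inversion of control: the framework calls the custom code

# Custom code: plugged in, swapped or removed without modifying the framework.
def default_browser(url):
    print(f"opening {url} in the default browser")

opener = UrlOpener()
opener.register("https", default_browser)
opener.open("https://example.com")
```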
This principle also lends itself to Gall's law, which states: "A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." It reinforces starting with a simple design first and then adding custom code to handle special scenarios. In a similar flavor is the Keep It Simple, Stupid, or KISS, principle, which states that most systems work best if they are kept simple rather than made complicated; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided. However, it should be noted that continuous incremental addition of simple code components by different team members unaware of other portions of the overall code base can cause code to be duplicated or to be badly/improperly defined in what it intends to achieve. This interferes with the "Black box" and "Code reuse" principles described above. So, having simplicity in a codebase comes with the ironic responsibility of progressively making it complex by revisiting definitions and creating hierarchies in the generic framework. This brings in the Rule of Three, a code-refactoring rule of thumb for deciding when similar pieces of code should be refactored to avoid duplication. It states that two instances of similar code don't require refactoring, but when similar code is used three times, it should be extracted into a new procedure, i.e., "Three strikes and you refactor".
Before starting with "Loose coupling", let's look at another aspect of "Inversion of Control" (IoC). In the IoC section, it is suggested that a generic framework should call into the custom code. However, doing so brings in a "chicken or egg" problem: how does the generic framework know what custom code could get used in the future, or how to interact with each piece individually? Or is it that the custom code is made first, and a generic framework is then made to interact with it; and if so, why the extra work? This twist is the nature of software development. The solution to the problem is that the custom code comes first, and as the codebase grows, it is refactored and broken into a generic framework and specialized code, with the hope that doing so will promote reuse of the generic framework and make it easier to add new custom features that hook into it. This is also discussed under the IoC section. "Loose coupling" is the name of the expectation with which the code was refactored to create the generic framework: any new custom code will use a standard way of interacting with the generic framework, as laid out by that framework; it will provide only the small functionality expected of it by the generic framework and no more; and so it will not try to interact with any unrelated component. This also implies that the concept of loose coupling comes into the limelight when an IoC-based design is used to have a generic framework and multiple specific pieces of code that interact with it. As a side note, realize that the encapsulation of data, as defined by the object-oriented programming paradigm, still holds intact even with loose coupling introduced.
Coming back to loose coupling (reference: Wikipedia): a loosely coupled system is one in which each of its components has, or makes use of, little or no knowledge of the definitions of other, separate components. Components in a loosely coupled system can be replaced with alternative implementations that provide the same services. The Law of Demeter, or LoD, also called the principle of least knowledge, is a design guideline based on the notion that an object should assume as little as possible about the structure or properties of anything else (including its own subcomponents), in accordance with the principle of "information hiding". It may be viewed as a corollary to the principle of least privilege, which dictates that a module possess only the information and resources necessary for its legitimate purpose.
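A small sketch of the Law of Demeter in Python (the Car and Engine names are assumptions made for illustration):

```python
class Engine:
    def start(self):
        print("engine started")

class Car:
    def __init__(self):
        self._engine = Engine()

    # Law of Demeter: expose the behavior itself, not the internal structure.
    def start(self):
        self._engine.start()

car = Car()

# Violation: the caller reaches through Car into its subcomponent,
# coupling itself to Car's internals.
# car._engine.start()

# Compliant: the caller talks only to its immediate collaborator.
car.start()
```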
This principle also lends itself to "Composition over inheritance", which states that in object-oriented programming, a class should achieve polymorphic behavior and code reuse via composition (i.e., by containing instances of other classes that implement the desired functionality) rather than via inheritance from a base or parent class. Using inheritance creates a strong coupling between the base and derived classes, such that changing any functionality in the base class changes the corresponding function in all derived classes in an implicit/hidden manner. Side note: a good way to decide between composition and inheritance is to check whether a "has-a" or an "is-a" relation exists between the derived and the base class. For example, one would say "a laptop has a processor", but "a Dell laptop is a laptop". In the first case, the "has-a" relation means that a laptop class should be formed by composing over contained instances, one of which is a processor. In the second case, the "is-a" relation means that a "Dell laptop" may be formed by extending a "laptop" class. Once again though, realize that doing so creates a strong coupling: any time the "laptop" definition is changed, it also changes the definition of "Dell laptop". The "is-a" relation, once asserted, must be guaranteed to hold forever; if this is not possible, prefer to stay with composition. If you want to change a current design that uses inheritance to one that uses composition, and want to do so with minimal disruption, then use the "simulated multiple inheritance" pattern described in Joshua Bloch's "Effective Java".
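A sketch of the two relations in Python (the Laptop, Processor and DellLaptop names come from the example above):

```python
class Processor:
    def execute(self, instruction):
        print(f"executing {instruction}")

# "has-a" relation -> composition: a laptop has a processor, so the
# contained instance provides the behavior.
class Laptop:
    def __init__(self, processor):
        self.processor = processor

    def run(self, program):
        self.processor.execute(program)

# "is-a" relation -> inheritance: a Dell laptop is a laptop. Note the
# strong coupling: any change to Laptop implicitly changes DellLaptop.
class DellLaptop(Laptop):
    pass

laptop = DellLaptop(Processor())
laptop.run("boot sequence")
```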
In the above example, if each contained instance is modeled as an independent coding concern, then this principle also lends itself to the idea that composition is the preferable construct for a generic framework capable of handling multiple concerns. The principle suggests that the framework should be loosely coupled to each concern and should be constructed by composing over the various concerns, which can then be independently added or removed. This is the principle of "separation of concerns".
Note how the concept of SOLID captures all the coding principles defined in the previous section. It is for this reason that the one-line summary for this book suggests that readers rely on this mnemonic acronym whenever they find themselves in doubt about how to design and implement a software system or a feature within it.
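SOLID is not expanded letter by letter here, but as one small illustration, the following Python sketch shows the "D" (dependency inversion): the high-level AccountService depends on an abstraction rather than on a concrete sender (all names are assumptions made for illustration):

```python
from abc import ABC, abstractmethod

# Abstraction that the high-level code depends on.
class NotificationSender(ABC):
    @abstractmethod
    def send(self, message): ...

# A concrete, low-level implementation.
class EmailSender(NotificationSender):
    def send(self, message):
        print(f"emailing: {message}")

class AccountService:
    def __init__(self, sender: NotificationSender):
        self._sender = sender  # any conforming implementation can be injected

    def close_account(self, account_id):
        self._sender.send(f"account {account_id} closed")

AccountService(EmailSender()).close_account("12345")
```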
A software design pattern is a generally accepted and reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that a product can directly use, nor is it finished code that can be added directly to the codebase. Rather, it is a description of how to solve a problem that the community of programmers/developers has identified and supports. It may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm. By adopting a design pattern, one adopts a design framework that itself doesn't contain any code but enables the development of future code that is more in line with SOLID principles. This is generally done with the expectation that it makes the code modular and testable, maintainable, and easier to extend or replace in the future. Note that patterns depend on the programming paradigm being used: the most commonly referenced design patterns are for the object-oriented paradigm, and so they may not be suitable for non-object-oriented languages. Also, a programming language may have built-in features for a specific problem, in which case it may not be necessary to use the pattern. Some common places to get design patterns are: the Design Patterns (book), this catalog of multiple patterns, Enterprise Integration Patterns, tutorialspoint.com, and here.
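As one concrete taste, here is a minimal Python sketch of the Strategy pattern from the Design Patterns book, where interchangeable algorithms sit behind a common interface (the fee-calculation scenario and names are assumptions made for illustration):

```python
# Two interchangeable strategies exposing the same fee(amount) interface.
class FlatFee:
    def fee(self, amount):
        return 5

class PercentFee:
    def fee(self, amount):
        return amount * 0.01

class TransferService:
    def __init__(self, fee_strategy):
        self._fee_strategy = fee_strategy  # the algorithm is pluggable

    def total_cost(self, amount):
        return amount + self._fee_strategy.fee(amount)

print(TransferService(FlatFee()).total_cost(100))     # 105
print(TransferService(PercentFee()).total_cost(100))  # 101.0
```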
An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. A few places to find anti-patterns, in addition to the Wikipedia page, are sourcemaking.com and this wiki book.
Reference: article by Martin Fowler, the website for the "Microservices Patterns" book, Wikipedia, Medium. The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating via lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery. It is also good practice to give ownership of a database schema to a single microservice; any other microservice should interact with that schema only indirectly, via interactions with the microservice that owns it. In doing so, the microservice architecture enables the rapid, frequent and reliable delivery of large, complex applications. It also enables an organization to independently evolve its technology stack. Needless to say, the topic of microservice architecture goes into much more depth and is covered in the references and elsewhere. It also comes with its own set of design patterns; some places that cover them are the "Microservices Patterns" book and this article at Microsoft.
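As a minimal sketch of a single microservice owning its data and exposing an HTTP resource API (this assumes the Flask library is installed; the account service and its route are illustrative, and a real service would own a proper database rather than an in-memory dict):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# This service alone owns the account data; other microservices must go
# through its HTTP API rather than touching the underlying schema directly.
_accounts = {"12345": {"holder_name": "Alice", "balance_amount": 120}}

@app.route("/accounts/<account_id>")
def get_account(account_id):
    account = _accounts.get(account_id)
    if account is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(account)

if __name__ == "__main__":
    app.run(port=8080)  # runs in its own process, independently deployable
```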
The 12-factor app methodology lists 12 suggestions for the development and deployment of modern web services. It is strongly suggested to go through each of the suggestions, understand it, and incorporate it in practice.
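As one small taste, factor III ("Config") says configuration should live in the environment, so the same build can run unchanged in development, staging and production. A minimal Python sketch (the variable names and defaults are assumptions made for illustration):

```python
import os

# Config read from the environment, not hard-coded into the build.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")
MAX_CONNECTIONS = int(os.environ.get("MAX_CONNECTIONS", "10"))

print(f"connecting to {DATABASE_URL} with up to {MAX_CONNECTIONS} connections")
```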
TODO: Agile >> Scrum, Kanban. TODO: DDD (domain-driven design). TODO: Web application framework: note that code architecture is best written in a manner that reuses the design principles of the web application framework being used. See the article about web application frameworks on Wikipedia. A web framework, or web application framework, is a software framework designed to support the development of web applications, including web services, web resources, and web APIs. These provide a standard way to build and deploy web applications on the World Wide Web, and most often automate the common activities involved. For example, many web frameworks provide libraries for database access, templating, and session management. In doing so, they promote code reuse and standardize the web application design and deployment process, which makes it easier to find support and resources and prevents spending developer time on common boilerplate code. As an example, Django is a popular framework for development in the Python language, and Spring is a popular framework for development in Java.