This is the first article in a short series dedicated to Design Patterns in Python.
Creational Design Patterns
Creational Design Patterns, as the name implies, deal with the creation of classes or objects.
They serve to abstract away the specifics of classes so that we'd be less dependent on their exact implementation, or so that we wouldn't have to deal with complex construction whenever we need them, or so we'd ensure some special instantiation properties.
They're very useful for lowering the level of dependency between our classes and controlling how the user interacts with them as well.
The design patterns covered in this article are:

- Factory
- Abstract Factory
- Builder
- Prototype
- Singleton
- Object Pool
Say you're making software for an insurance company which offers insurance to people who are employed full-time. You've built the application around a class called Worker.
However, the client decides to expand their business and will now provide their services to unemployed people as well, albeit with different procedures and conditions.
Now you have to make an entirely new class for the unemployed, which will take a completely different constructor! But now you don't know which constructor to call in a general case, much less which arguments to pass to it.
You could scatter ugly conditionals all over your code, wrapping every constructor invocation in if statements and using some possibly expensive operation to check the type of the object itself.
And if errors surface during initialization, the handling code has to be edited at every one of the hundred places the constructors are invoked.
It doesn't need spelling out: this approach is less than desirable, non-scalable and all-around unsustainable.
Alternatively, you could consider the Factory Pattern.
Factories are used to encapsulate the information about classes we're using, while instantiating them based on certain parameters we provide them with.
By using a factory, we can switch out an implementation with another by simply changing the parameter that was used to decide the original implementation in the first place.
This decouples the implementation from the usage in such a way that we can easily scale the application by adding new implementations and simply instantiating them through the factory - with the exact same codebase.
If we just get another factory as a parameter, we don't even need to know which class it produces. We just need to have a uniform factory method which returns a class guaranteed to have a certain set of behaviors. Let's take a look.
For starters, don't forget to import the tools for defining abstract methods:

```python
from abc import ABC, abstractmethod
```
We need our produced classes to implement some set of methods which enable us to work with them uniformly. For that purpose, we implement the following interface:
```python
class Product(ABC):
    @abstractmethod
    def calculate_risk(self):
        pass
```
And now we inherit from it with a Worker and an Unemployed class:
```python
class Worker(Product):
    def __init__(self, name, age, hours):
        self.name = name
        self.age = age
        self.hours = hours

    def calculate_risk(self):
        # Please imagine a more plausible implementation
        return self.age + 100 / self.hours

    def __str__(self):
        return self.name + " [" + str(self.age) + "] - " + str(self.hours) + "h/week"


class Unemployed(Product):
    def __init__(self, name, age, able):
        self.name = name
        self.age = age
        self.able = able

    def calculate_risk(self):
        # Please imagine a more plausible implementation
        if self.able:
            return self.age + 10
        else:
            return self.age + 30

    def __str__(self):
        if self.able:
            return self.name + " [" + str(self.age) + "] - able to work"
        else:
            return self.name + " [" + str(self.age) + "] - unable to work"
```
Now that we have our people, let's make their factory:
```python
class PersonFactory:
    def get_person(self, type_of_person):
        if type_of_person == "worker":
            return Worker("Oliver", 22, 30)
        if type_of_person == "unemployed":
            return Unemployed("Sophie", 33, False)
```
Here, we've hardcoded the constructor parameters for clarity, though typically you'd let the caller supply them and have the factory pass them along to whichever constructor it selects.
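For instance, a variant where the caller supplies the constructor arguments might look like this - a sketch, where the `**kwargs`-forwarding signature and the `_types` mapping are assumptions rather than part of the example above:

```python
class Worker:
    def __init__(self, name, age, hours):
        self.name, self.age, self.hours = name, age, hours


class Unemployed:
    def __init__(self, name, age, able):
        self.name, self.age, self.able = name, age, able


class PersonFactory:
    # Hypothetical variant: the mapping decides which class to instantiate,
    # and the caller's keyword arguments are forwarded to its constructor
    _types = {"worker": Worker, "unemployed": Unemployed}

    def get_person(self, type_of_person, **kwargs):
        try:
            return self._types[type_of_person](**kwargs)
        except KeyError:
            raise ValueError("Unknown person type: " + type_of_person)


factory = PersonFactory()
person = factory.get_person("worker", name="Oliver", age=22, hours=30)
print(type(person).__name__)  # Worker
```

Registering a new person type then means adding one entry to the mapping, with no change to the calling code.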
To test out how all of this works, let's instantiate our factory and let it produce a couple of people:
```python
factory = PersonFactory()

product = factory.get_person("worker")
print(product)
product2 = factory.get_person("unemployed")
print(product2)
```
```
Oliver [22] - 30h/week
Sophie [33] - unable to work
```
The Abstract Factory pattern deals with situations where you need to create a family of different objects. Although they're different, they're somehow grouped together by a certain trait.
For example, you may need to create a main course and a dessert at an Italian and a French restaurant, but you won't mix one cuisine with the other.
The idea is very similar to the regular Factory Pattern - the only difference is that a whole family of related factories shares a common creation interface, and the kind of factory you pick determines the family of objects you get.
An abstract factory is responsible for the creation of entire groups of objects, alongside their respective factories - but it doesn't concern itself with the concrete implementations of these objects. That part is left for their respective factories:
```python
from abc import ABC, abstractmethod


class Product(ABC):
    @abstractmethod
    def cook(self):
        pass


class FettuccineAlfredo(Product):
    name = "Fettuccine Alfredo"

    def cook(self):
        print("Italian main course prepared: " + self.name)


class Tiramisu(Product):
    name = "Tiramisu"

    def cook(self):
        print("Italian dessert prepared: " + self.name)


class DuckALOrange(Product):
    name = "Duck À L'Orange"

    def cook(self):
        print("French main course prepared: " + self.name)


class CremeBrulee(Product):
    name = "Crème brûlée"

    def cook(self):
        print("French dessert prepared: " + self.name)


class Factory(ABC):
    @abstractmethod
    def get_dish(self, type_of_meal):
        pass


class ItalianDishesFactory(Factory):
    def get_dish(self, type_of_meal):
        if type_of_meal == "main":
            return FettuccineAlfredo()
        if type_of_meal == "dessert":
            return Tiramisu()


class FrenchDishesFactory(Factory):
    def get_dish(self, type_of_meal):
        if type_of_meal == "main":
            return DuckALOrange()
        if type_of_meal == "dessert":
            return CremeBrulee()


class FactoryProducer:
    def get_factory(self, type_of_factory):
        if type_of_factory == "italian":
            return ItalianDishesFactory()
        if type_of_factory == "french":
            return FrenchDishesFactory()
```
We can test the results by creating both factories and calling the respective cook() methods on all objects:
```python
fp = FactoryProducer()

fac = fp.get_factory("italian")
main = fac.get_dish("main")
main.cook()
dessert = fac.get_dish("dessert")
dessert.cook()

fac1 = fp.get_factory("french")
main = fac1.get_dish("main")
main.cook()
dessert = fac1.get_dish("dessert")
dessert.cook()
```
```
Italian main course prepared: Fettuccine Alfredo
Italian dessert prepared: Tiramisu
French main course prepared: Duck À L'Orange
French dessert prepared: Crème brûlée
```
The Builder pattern is motivated by problems like this one: you need to represent a robot with your object structure. The robot can be humanoid, with four limbs and upright posture, or it can be animal-like with a tail, wings, etc.
It can use wheels to move, or it can use helicopter blades. It can use cameras, an infrared detection module... you get the picture.
Imagine the constructor for this thing:
```python
def __init__(self, left_leg, right_leg, left_arm, right_arm,
             left_wing, right_wing, tail, blades, cameras,
             infrared_module, # ...
             ):
    self.left_leg = left_leg
    if left_leg is None:
        bipedal = False
    self.right_leg = right_leg
    self.left_arm = left_arm
    self.right_arm = right_arm
    # ...
```
Instantiating this class would be extremely unreadable; since we're working in Python, it would be very easy to get some of the argument types wrong, and piling up countless arguments in a constructor is hard to manage.
Also, what if we don't want the robot to implement all the fields within the class? What if we want it to only have legs instead of having both legs and wheels?
Python doesn't support overloading constructors, which would help us define such cases (and even if we could, it would only lead to even more messy constructors).
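What Python does offer instead of overloaded constructors is alternative constructors written as class methods - a minimal sketch, where this stripped-down Robot and its with_* methods are invented purely for illustration:

```python
class Robot:
    def __init__(self):
        # Start "empty"; the alternative constructors below fill it in
        self.traversal = []

    @classmethod
    def with_legs(cls):
        robot = cls()
        robot.traversal.append("two legs")
        return robot

    @classmethod
    def with_wheels(cls):
        robot = cls()
        robot.traversal.append("four wheels")
        return robot


walker = Robot.with_legs()
driver = Robot.with_wheels()
print(walker.traversal)  # ['two legs']
print(driver.traversal)  # ['four wheels']
```

This works for a handful of fixed configurations, but it doesn't scale to the combinatorial explosion of modules above - which is exactly where the Builder pattern comes in.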
We can make a Builder class that constructs our object and adds appropriate modules to our robot. Instead of a convoluted constructor, we can instantiate an object and add the needed components using functions.
We call the construction of each module separately, after instantiating the object. Let's go ahead and define a Robot with some default values:
```python
class Robot:
    def __init__(self):
        self.bipedal = False
        self.quadripedal = False
        self.wheeled = False
        self.flying = False
        self.traversal = []
        self.detection_systems = []

    def __str__(self):
        string = ""
        if self.bipedal:
            string += "BIPEDAL "
        if self.quadripedal:
            string += "QUADRIPEDAL "
        if self.flying:
            string += "FLYING ROBOT "
        if self.wheeled:
            string += "ROBOT ON WHEELS\n"
        else:
            string += "ROBOT\n"
        if self.traversal:
            string += "Traversal modules installed:\n"
            for module in self.traversal:
                string += "- " + str(module) + "\n"
        if self.detection_systems:
            string += "Detection systems installed:\n"
            for system in self.detection_systems:
                string += "- " + str(system) + "\n"
        return string


class BipedalLegs:
    def __str__(self):
        return "two legs"


class QuadripedalLegs:
    def __str__(self):
        return "four legs"


class Arms:
    def __str__(self):
        return "two arms"


class Wings:
    def __str__(self):
        return "wings"


class Blades:
    def __str__(self):
        return "blades"


class FourWheels:
    def __str__(self):
        return "four wheels"


class TwoWheels:
    def __str__(self):
        return "two wheels"


class CameraDetectionSystem:
    def __str__(self):
        return "cameras"


class InfraredDetectionSystem:
    def __str__(self):
        return "infrared"
```
Notice that we've omitted specific initializations in the constructor, and used default values instead. This is because we'll use the Builder classes to initialize these values.
First, we implement an abstract Builder which defines our interface for building:
```python
from abc import ABC, abstractmethod


class RobotBuilder(ABC):
    @abstractmethod
    def reset(self):
        pass

    @abstractmethod
    def build_traversal(self):
        pass

    @abstractmethod
    def build_detection_system(self):
        pass
```
Now we can implement multiple kinds of Builders that obey this interface, for instance for an android, and for an autonomous car:
```python
class AndroidBuilder(RobotBuilder):
    def __init__(self):
        self.product = Robot()

    def reset(self):
        self.product = Robot()

    def get_product(self):
        return self.product

    def build_traversal(self):
        self.product.bipedal = True
        self.product.traversal.append(BipedalLegs())
        self.product.traversal.append(Arms())

    def build_detection_system(self):
        self.product.detection_systems.append(CameraDetectionSystem())


class AutonomousCarBuilder(RobotBuilder):
    def __init__(self):
        self.product = Robot()

    def reset(self):
        self.product = Robot()

    def get_product(self):
        return self.product

    def build_traversal(self):
        self.product.wheeled = True
        self.product.traversal.append(FourWheels())

    def build_detection_system(self):
        self.product.detection_systems.append(InfraredDetectionSystem())
```
Notice that they implement the same methods, yet there's an inherently different structure of objects underneath - and the end user doesn't need to deal with the particulars of that structure.
Of course, we could make a Robot which has both legs and wheels, and the user would have to add each one separately; we can also make very specific builders which add only one appropriate module for each "part".
Let's try out using an AndroidBuilder to build an android:
```python
builder = AndroidBuilder()
builder.build_traversal()
builder.build_detection_system()
print(builder.get_product())
```
Running this code will yield:
```
BIPEDAL ROBOT
Traversal modules installed:
- two legs
- two arms
Detection systems installed:
- cameras
```
And now, let's use an AutonomousCarBuilder to build a car:
```python
builder = AutonomousCarBuilder()
builder.build_traversal()
builder.build_detection_system()
print(builder.get_product())
```
Running this code will yield:
```
ROBOT ON WHEELS
Traversal modules installed:
- four wheels
Detection systems installed:
- infrared
```
The initialization is a lot cleaner and more readable than the messy constructor from before, and we have the flexibility of adding only the modules we want.
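One refinement the code above doesn't show is to return self from every build method so the calls can be chained. Here's a sketch - this simplified FluentRobotBuilder and its string "modules" are assumptions, not part of the example above:

```python
class Robot:
    def __init__(self):
        self.traversal = []
        self.detection_systems = []


class FluentRobotBuilder:
    # Hypothetical fluent variant: each build step returns the builder itself
    def __init__(self):
        self.product = Robot()

    def add_traversal(self, module):
        self.product.traversal.append(module)
        return self

    def add_detection(self, system):
        self.product.detection_systems.append(system)
        return self

    def get_product(self):
        return self.product


robot = (FluentRobotBuilder()
         .add_traversal("two legs")
         .add_detection("cameras")
         .get_product())
print(robot.traversal)  # ['two legs']
```

The chained form reads almost like a sentence, which is part of why fluent builders are popular in configuration-heavy APIs.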
If the fields in our product use relatively standard constructors, we can even make a so-called Director to manage the particular builders:
```python
class Director:
    def make_android(self, builder):
        builder.build_traversal()
        builder.build_detection_system()
        return builder.get_product()

    def make_autonomous_car(self, builder):
        builder.build_traversal()
        builder.build_detection_system()
        return builder.get_product()


director = Director()
builder = AndroidBuilder()
print(director.make_android(builder))
```
Running this piece of code will yield:
```
BIPEDAL ROBOT
Traversal modules installed:
- two legs
- two arms
Detection systems installed:
- cameras
```
That being said, the Builder pattern doesn't make much sense on small, simple classes as the added logic for building them just adds more complexity.
Though, when it comes to big, complicated classes with numerous fields, such as multi-layer neural networks - the Builder pattern is a life saver.
We need to clone an object, but we may not know its exact type or constructor parameters; some of its state may not be assigned through the constructor at all, or may depend on the state of the system at a particular point during runtime.
If we try to do it directly we'll add a lot of dependencies branching in our code, and it may not even work at the end.
The Prototype design pattern addresses the problem of copying objects by delegating it to the objects themselves: every copyable object must implement a method called clone and use it to return an exact copy of itself.
Let's go ahead and declare a common clone method in a parent class, and then override it in the child classes:
```python
from abc import ABC, abstractmethod


class Prototype(ABC):
    @abstractmethod
    def clone(self):
        pass


class MyObject(Prototype):
    def __init__(self, arg1, arg2):
        self.field1 = arg1
        self.field2 = arg2
        self.performed_operation = False

    def operation(self):
        self.performed_operation = True

    def clone(self):
        obj = MyObject(self.field1, self.field2)
        obj.performed_operation = self.performed_operation
        return obj
```
Alternatively, you can use the deepcopy function instead of assigning the fields manually like in the previous example:
```python
from copy import deepcopy


class MyObject(Prototype):
    def __init__(self, arg1, arg2):
        self.field1 = arg1
        self.field2 = arg2
        self.performed_operation = False

    def operation(self):
        self.performed_operation = True

    def clone(self):
        return deepcopy(self)
```
The Prototype pattern can be really useful in large-scale applications that instantiate a lot of objects. Sometimes, copying an already existing object is less costly than instantiating a new one.
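A common companion to the pattern is a prototype registry that stores pre-configured instances and hands out clones of them on demand. Here's a sketch - the PrototypeRegistry and Enemy classes are assumptions for illustration, not part of the example above:

```python
from copy import deepcopy


class PrototypeRegistry:
    def __init__(self):
        self._prototypes = {}

    def register(self, name, obj):
        self._prototypes[name] = obj

    def clone(self, name):
        # Hand out a deep copy so callers can't mutate the stored prototype
        return deepcopy(self._prototypes[name])


class Enemy:
    def __init__(self, health, loadout):
        self.health = health
        self.loadout = loadout


registry = PrototypeRegistry()
registry.register("grunt", Enemy(health=100, loadout=["pistol"]))

clone = registry.clone("grunt")
clone.loadout.append("shield")
print(registry.clone("grunt").loadout)  # ['pistol'] - the prototype is untouched
```

Callers can then assemble objects from named templates without ever touching the concrete constructors.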
A Singleton is an object with two main characteristics:
- It can have at most one instance
- It should be globally accessible in the program
These properties are both important, although in practice you'll often hear people calling something a Singleton even if it has only one of these properties.
Having only one instance is usually a mechanism for controlling access to some shared resource. For example, two threads may work with the same file, so instead of both opening it separately, a Singleton can provide a unique access point to both of them.
Global accessibility is important because after your class has been instantiated once, you'd need to pass that single instance around in order to work with it. It can't be instantiated again. That's why it's easier to make sure that whenever you try to instantiate the class again, you just get the same instance you've already had.
Let's go ahead and implement the Singleton pattern by making an object globally accessible and limited to a single instance:
```python
from typing import Optional


class MetaSingleton(type):
    _instance: Optional[type] = None

    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(MetaSingleton, cls).__call__(*args, **kwargs)
        return cls._instance


class BaseClass:
    field = 5


class Singleton(BaseClass, metaclass=MetaSingleton):
    pass
```
Optional here is a type hint denoting that the field can hold either an instance of the type stated in the brackets or None. The __call__ method allows instances of a class to be used as functions. It's also invoked during initialization, so when we write something like a = Singleton(), under the hood Python invokes the metaclass' __call__.
In Python, everything is an object - and that includes classes. All of the usual classes you write, as well as the standard classes, have type as their type. Even type is of type type. What this means is that type is a metaclass: other classes are instances of type, just like ordinary objects are instances of those classes. In our case, Singleton is an instance of MetaSingleton.
All of this means that our __call__ method is called whenever a new object is created. It provides a new instance if we haven't initialized one already; if we have, it simply returns the existing instance. super(MetaSingleton, cls).__call__(*args, **kwargs) calls the superclass' __call__. Our superclass in this case is type, whose __call__ implementation performs the actual initialization with the given arguments.
We've passed the metaclass (MetaSingleton), the class being instantiated (cls) and any constructor arguments; the resulting instance is cached in the _instance field.
The purpose of using a metaclass in this case rather than a simpler implementation is essentially the ability to reuse the code.
We derived one class from it in this case, but if we needed another Singleton for another purpose we could just derive the same metaclass instead of implementing essentially the same thing.
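For instance, reusing the same metaclass for a second, unrelated Singleton costs a single line. A sketch - the Logger and Configuration names here are made up for illustration:

```python
from typing import Optional


class MetaSingleton(type):
    _instance: Optional[type] = None

    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance


class Logger(metaclass=MetaSingleton):
    pass


class Configuration(metaclass=MetaSingleton):
    pass


print(Logger() is Logger())                # True
print(Configuration() is Configuration())  # True
print(Logger() is Configuration())         # False - each class caches its own instance
```

Assigning cls._instance stores the instance on the class itself (Logger or Configuration), not on the shared metaclass, which is why the two singletons stay independent.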
Now we can try using it:

```python
a = Singleton()
b = Singleton()
print(a == b)  # True - both names refer to the same instance
```
Because of its global access point, it's wise to make the Singleton thread-safe. Luckily, we don't have to change much - we can simply guard the instantiation in __call__ with a lock:
```python
import threading
from typing import Optional


class MetaSingleton(type):
    _instance: Optional[type] = None
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            if not cls._instance:
                cls._instance = super().__call__(*args, **kwargs)
        return cls._instance
```
This way, if two threads start to instantiate the Singleton at the same time, one of them will stop at the lock. When the context manager releases the lock, the other one will enter the if statement and see that the instance has indeed already been created by the first thread.
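To convince yourself, you can hammer the locked Singleton from several threads at once and check that exactly one instance comes back - a quick sketch combining the pieces above:

```python
import threading
from typing import Optional


class MetaSingleton(type):
    _instance: Optional[type] = None
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        with cls._lock:
            if cls._instance is None:
                cls._instance = super().__call__(*args, **kwargs)
        return cls._instance


class Singleton(metaclass=MetaSingleton):
    pass


instances = []


def make_instance():
    instances.append(Singleton())


threads = [threading.Thread(target=make_instance) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(id(obj) for obj in instances)))  # 1 - every thread got the same instance
```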
We have a class in our project - let's call it MyClass. MyClass is very useful and is used often throughout the project, albeit for short periods of time.
Its instantiation and initialization are very expensive, however, and our program runs very slowly because it constantly needs to make new instances just to use them for a few operations.
The Object Pool pattern fixes this: we'll make a pool of objects that are instantiated when we create the pool itself. Whenever we need an object of type MyClass, we'll acquire it from the pool, use it, and then release it back into the pool to be reused.
If the object has some sort of default starting state, releasing it will always reset it to that state. If the pool is left empty, we'll initialize a new object for the user; when the user is finished with it, they'll release it back into the pool to be used again.
Let's go ahead and first define MyClass and its ObjectPool:
```python
class MyClass:
    def __init__(self):
        self.setting = 0

    # Return the resource to its default setting
    def reset(self):
        self.setting = 0


class ObjectPool:
    def __init__(self, size):
        self.objects = [MyClass() for _ in range(size)]

    def acquire(self):
        if self.objects:
            return self.objects.pop()
        else:
            # The pool is empty - create a fresh object for the user
            return MyClass()

    def release(self, reusable):
        reusable.reset()
        self.objects.append(reusable)
```
And to test it out:
```python
pool = ObjectPool(10)
reusable = pool.acquire()
# ... work with the object ...
pool.release(reusable)
```
Note that this is a bare-bones implementation and that in practice this pattern may be used together with Singleton to provide a single globally accessible pool.
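A sketch of that combination, assuming a MetaSingleton like the one from the previous section - giving ObjectPool that metaclass makes every lookup resolve to the same pool:

```python
from typing import Optional


class MetaSingleton(type):
    _instance: Optional[type] = None

    def __call__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance


class MyClass:
    def __init__(self):
        self.setting = 0

    def reset(self):
        self.setting = 0


class ObjectPool(metaclass=MetaSingleton):
    def __init__(self, size):
        self.objects = [MyClass() for _ in range(size)]

    def acquire(self):
        return self.objects.pop() if self.objects else MyClass()

    def release(self, reusable):
        reusable.reset()
        self.objects.append(reusable)


# Both lookups resolve to the same pool; the second size argument is ignored
pool_a = ObjectPool(10)
pool_b = ObjectPool(10)
print(pool_a is pool_b)  # True
```

Note that after the first instantiation the constructor arguments are ignored, so the pool size is fixed by whoever creates it first.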
Note that the utility of this pattern is disputed in garbage-collected languages.
Allocating objects that occupy only memory (i.e. hold no external resources) tends to be relatively inexpensive in such languages, while a large number of "live" references to objects can actually slow garbage collection down, because the GC traverses all of them.
With this, we have covered the most important Creational Design Patterns in Python - the problems they solve and how they solve them.
Being familiar with design patterns is an extremely handy skill for any developer, as they provide solutions to common problems encountered in programming.
Being aware of both the motivations and solutions, you can also avoid accidentally coming up with an anti-pattern while trying to solve a problem.