When you have several years of commercial development experience under your belt, interviews shift from testing basic knowledge to assessing the depth of understanding, architectural thinking, and practical experience. For senior developer positions, it's crucial not only to write code but to design it: understanding how decisions impact performance, scalability, readability, and maintainability. Interview questions for experienced professionals go beyond syntax and delve into topics such as memory management, multithreading, asynchrony, protocols, interfaces, metaprogramming, optimization, and typing.
That's why we've compiled a selection of real questions frequently asked in technical interviews at companies hiring mid-level and senior Python developers. Each question is accompanied by a detailed and technically accurate answer to help you not just memorize information but truly understand the logic and essence.
Advanced Python Interview Questions
At the interview stage for an experienced Python developer, key questions assess architectural thinking, knowledge of the language's internal mechanisms, and the ability to design scalable solutions. The advanced section covers topics beyond application code: here, questions involve the GIL, protocols, metaclasses, iterators, thread safety, high-level typing, and code optimization. These topics require not just understanding "how it works" but also explaining "why it's designed that way" and "when to apply it." Such knowledge distinguishes a senior developer from a mid-level specialist.
1. How does the GIL (Global Interpreter Lock) work in Python, and how does it affect multithreading?
The GIL is a synchronization mechanism in the CPython interpreter that allows only one thread to execute Python bytecode at any given time, even if multiple threads are running in the application.
This means that for CPU-bound tasks, multithreading in Python doesn't provide true parallelism, as threads take turns accessing the interpreter, blocking each other and resulting in no performance gain.
However, for I/O-bound tasks (file operations, network requests, databases), the GIL is less of an issue because when a thread is blocked at the OS level, Python releases the GIL, allowing another thread to execute.
For scalable parallel processing of CPU-intensive tasks, the multiprocessing module is used, which launches separate processes, each with its own interpreter and GIL.
It's also worth noting that interpreters like Jython and IronPython do not have a GIL, but they don't support all CPython features.
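A minimal sketch of why I/O-bound work still benefits from threads: time.sleep stands in for a blocking I/O call, and because the GIL is released during the wait, the threads overlap instead of running sequentially.

```python
import threading
import time

def io_task():
    # time.sleep releases the GIL, so waiting threads overlap
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Four 0.2 s "I/O waits" overlap instead of summing to 0.8 s
print(f"{elapsed:.2f}s")
```

With a CPU-bound function in place of the sleep, the same four threads would take roughly as long as running the work serially, which is exactly the effect the GIL has.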
2. How does the import mechanism work in Python? What happens when a module is imported?
When importing a module, Python first checks the sys.modules cache. If the module has already been loaded, the existing object is returned, and no re-initialization occurs.
If the module isn't in sys.modules, Python searches:
- the list of built-in modules first;
- then the paths in sys.path — starting with the script's directory (or the current directory in interactive mode), followed by virtual environment directories, the standard library, and installed packages.
Once the module is found:
- it's compiled to bytecode (if not already compiled),
- the module's code is executed,
- a module object is created,
- it's registered in sys.modules.
If the module contains side effects at import time, they are executed immediately. Therefore, it's best to place such code under if __name__ == "__main__".
Also, note the distinction between absolute and relative imports, lazy imports (e.g., via importlib.util.LazyLoader), and that __init__.py turns a directory into a package.
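The caching behavior described above can be observed directly: importing the same module twice returns the very same object from sys.modules, with no re-execution of the module's code.

```python
import sys

import json                     # first import: loaded, executed, cached
first = sys.modules["json"]

import json                     # second import: returned from the cache
second = sys.modules["json"]

same_object = first is second   # no re-initialization happened
```

This is also why import-time side effects run only once per process, and why importlib.reload() exists for the rare cases where re-execution is genuinely needed.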
3. What are descriptors and how are they used in Python?
A descriptor is an object that manages access to another object's attribute through the implementation of one or more methods: __get__, __set__, __delete__.
Descriptors allow control over reading, writing, and deleting values. They underpin @property, @staticmethod, @classmethod, as well as dataclasses, ORM systems, and other high-level APIs.
There are:
- non-data descriptors — implement only __get__;
- data descriptors — implement __set__ and/or __delete__, usually alongside __get__.
Data descriptors take precedence over instance attributes in __dict__, making them especially useful for validation, caching, computed values, and access control.
Descriptors work only as class attributes, not as instance attributes.
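A small sketch of a validating data descriptor (the class names here are illustrative). Because it defines __set__, it takes precedence over the instance __dict__ and every write goes through validation.

```python
class Positive:
    """Data descriptor: validates on write, stores in the instance dict."""

    def __set_name__(self, owner, name):
        self.name = name          # called once, at class-creation time

    def __get__(self, obj, objtype=None):
        if obj is None:           # accessed on the class itself
            return self
        return obj.__dict__[self.name]

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError(f"{self.name} must be positive")
        obj.__dict__[self.name] = value

class Order:
    price = Positive()            # descriptors work only as class attributes
    quantity = Positive()

    def __init__(self, price, quantity):
        self.price = price        # goes through Positive.__set__
        self.quantity = quantity

order = Order(10, 3)
```

Assigning order.price = -5 would raise ValueError, while reads return the validated value from the instance dict.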
4. How does super() work in Python and why use it in modern projects?
super() is used to call methods from a parent class. It returns a proxy object that allows you to call the next method in the Method Resolution Order (MRO) chain.
This is particularly important in multiple inheritance. Calling super() ensures that each class in the hierarchy is called exactly once and in the correct order.
The modern zero-argument syntax super().method() was introduced in Python 3, simplifying the code and reducing errors.
super() is used not only in __init__ but also in __str__, __enter__, __exit__, __eq__, and other methods where the parent's behavior should be extended rather than completely overridden.
It's important to understand that super() doesn't always call the immediate parent's method — it follows the MRO chain, so the inheritance order is critical.
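A classic diamond-inheritance sketch makes the MRO point concrete: in class B below, super() dispatches to C (the next class in D's MRO), not to B's own base A.

```python
class A:
    def greet(self):
        return ["A"]

class B(A):
    def greet(self):
        return ["B"] + super().greet()   # next in MRO, not necessarily A

class C(A):
    def greet(self):
        return ["C"] + super().greet()

class D(B, C):
    def greet(self):
        return ["D"] + super().greet()

# MRO of D: D -> B -> C -> A -> object; each class is visited exactly once
order = D().greet()
```

Calling D().greet() yields the contributions in MRO order, which is why cooperative multiple inheritance only works when every class in the chain calls super().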
5. What is a decorator with parameters and how is it implemented?
A parameterized decorator is a function that takes arguments and returns the actual decorator. It is created in three steps:
- The outer function receives the parameters.
- It returns the decorator itself.
- The decorator, in turn, takes and wraps the target function.
This approach allows you to create customizable wrappers — for example, to limit execution time, log with different levels, enable/disable caching.
It's important to use functools.wraps to preserve the original function's name and docstring.
This pattern is commonly used in frameworks (Flask, FastAPI, Django), and in logging, tracking, caching, authorization, and testing libraries.
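The three steps above can be sketched as follows (the repeat decorator is an illustrative example, not a standard-library function):

```python
import functools

def repeat(times):                      # step 1: outer function takes parameters
    def decorator(func):                # step 2: returns the actual decorator
        @functools.wraps(func)          # preserves __name__ and __doc__
        def wrapper(*args, **kwargs):   # step 3: wraps the target function
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

calls = []

@repeat(times=3)
def ping():
    """Record one call."""
    calls.append("ping")

ping()                                  # the wrapped body runs three times
```

Without functools.wraps, ping.__name__ would be "wrapper" and the docstring would be lost, which breaks introspection and debugging tools.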
6. How does the iteration protocol work and how to create a custom iterator?
An iterable object in Python is one that defines the __iter__() method, which returns an iterator. An iterator is an object with a __next__() method that returns the next item in the sequence and raises StopIteration when there are no more items.
To create a custom iterator, you need to implement:
- __iter__() — returns self,
- __next__() — returns the next value or raises an exception.
Additionally, you can use generators — a simplified way to create iterators using yield.
The iteration protocol underpins for loops, list comprehensions, zip, map, filter, and enumerate; understanding it is critical when designing APIs, adapters, and data stream processors.
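Both approaches can be sketched side by side; the Countdown class is an illustrative example of the raw protocol, and the generator achieves the same with far less code.

```python
class Countdown:
    """Hand-written iterator yielding n, n-1, ..., 1."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self                 # an iterator returns itself

    def __next__(self):
        if self.current <= 0:
            raise StopIteration     # signals the end of iteration
        value = self.current
        self.current -= 1
        return value

values = list(Countdown(3))

def countdown(start):
    """Equivalent generator: yield implements the protocol for you."""
    while start > 0:
        yield start
        start -= 1
```

Note that a hand-written iterator like Countdown is exhausted after one pass; separating the iterable from the iterator (returning a fresh iterator from __iter__) fixes that when re-iteration is needed.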
7. What are __enter__ and __exit__, and how do context managers work?
A context manager is an object that manages entry into and exit from a with block. It implements two methods:
- __enter__() — called upon entering the with block and can return a resource;
- __exit__() — called upon exiting the block, even if an exception occurs. It receives the exception type, value, and traceback.
Context managers automate resource management: closing files, releasing connections, unlocking, rolling back transactions.
Instead of a class, you can use @contextlib.contextmanager to create a manager from a generator.
This approach is widely used in database APIs, file managers, testing frameworks, and logging systems.
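A minimal sketch of both forms; the events list just records the order in which the hooks fire.

```python
from contextlib import contextmanager

events = []

class Managed:
    def __enter__(self):
        events.append("enter")
        return self                # the value bound by `as`

    def __exit__(self, exc_type, exc_value, traceback):
        events.append("exit")      # runs even if an exception occurred
        return False               # False: do not suppress exceptions

with Managed():
    events.append("body")

@contextmanager
def managed():
    events.append("g-enter")       # code before yield plays __enter__
    try:
        yield
    finally:
        events.append("g-exit")    # the finally block plays __exit__

with managed():
    events.append("g-body")
```

Returning True from __exit__ (or swallowing the exception inside the generator's except clause) suppresses the exception, which is how constructs like contextlib.suppress are built.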
8. What does __call__ do and how can it be applied?
The __call__ method allows an instance of a class to be called like a function. This means the object can behave like a function while maintaining its state.
Applications include:
- wrapper objects (function-like objects);
- caching proxies;
- closures with preserved state;
- configurable functions;
- building DSLs (domain-specific languages).
This approach enhances readability, allows using objects as functional interfaces, and provides architectural flexibility.
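A small sketch of a stateful function-like object (the class name is illustrative): the instance is invoked with call syntax while its accumulated state persists between calls.

```python
class RunningAverage:
    """Function-like object that keeps state between calls."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, value):
        self.total += value
        self.count += 1
        return self.total / self.count

avg = RunningAverage()
avg(10)
result = avg(20)   # the instance is called like a function
```

Unlike a closure, the state here is inspectable (avg.total, avg.count), which is one reason callables are popular for configurable wrappers and caching proxies.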
9. How does yield from work and how does it differ from yield?
yield from is a construct that delegates the generation of values to another generator or iterable object. It passes through not only values but also exceptions and the return value of the nested generator.
Differences:
- yield manually iterates and returns values;
- yield from delegates the entire iteration.
The advantage is cleaner and more readable code with nested generators. It also simplifies handling nested structures, asynchronous generators, and data processing pipelines.
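A minimal sketch of the delegation, including how the nested generator's return value is captured by the yield from expression:

```python
def inner():
    yield 1
    yield 2
    return "done"                   # travels back via yield from

def outer():
    status = yield from inner()     # delegates values, exceptions, and return
    yield status                    # re-emit the captured return value

flat = list(outer())
```

With a plain yield you would need an explicit for loop over inner() and would have no direct access to its return value (it only surfaces as StopIteration.value).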
10. How does the serialization mechanism work in Python, and when should you choose pickle, json, or marshal?
Serialization is the process of converting an object into a byte stream or string suitable for storage or transmission.
- json — a human-readable format that supports only basic types: strings, numbers, lists, dictionaries. Suitable for data exchange between systems.
- pickle — serializes almost everything, including functions, classes, and instances. Not human-readable, depends on the Python version, and unsafe when deserializing untrusted data.
- marshal — used for serializing bytecode; faster but unstable between Python versions. Intended for internal use only.
For data transmission — use json. For storing objects between sessions — use pickle. For compiled modules (.pyc) — marshal.
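A quick round-trip sketch contrasting the two application-level formats:

```python
import json
import pickle

data = {"user": "alice", "scores": [1, 2, 3]}

# json: human-readable text, basic types only
text = json.dumps(data)
restored_from_json = json.loads(text)

# pickle: Python-specific binary, handles far more object types
blob = pickle.dumps(data)
restored_from_pickle = pickle.loads(blob)
```

Note the security asymmetry: json.loads of untrusted input can at worst produce unexpected data, while pickle.loads of untrusted bytes can execute arbitrary code.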
11. What are metaclasses in Python and where are they used?
A metaclass is a class whose instances are other classes. In other words, a metaclass controls the creation of classes, just as a regular class controls the creation of instances.
By default, a class is created using type. If you specify metaclass=CustomMeta, then when creating the class, Python will call CustomMeta.__new__() and CustomMeta.__init__(), passing the class name, base classes, and attribute dictionary.
Metaclasses allow you to:
- automatically modify classes upon creation (e.g., add attributes, register them in a registry);
- inject validation logic;
- implement design patterns like Singleton, Factory, Registry.
This is a powerful tool in developing frameworks, ORMs, APIs, and plugin systems where controlling behavior at the architectural level is important.
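A minimal registry-pattern sketch (the plugin names are illustrative): the metaclass records every concrete subclass at class-creation time, with no explicit registration calls.

```python
registry = {}

class PluginMeta(type):
    """Registers every subclass the moment its class statement runs."""

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:                      # skip the root base class itself
            registry[name] = cls
        return cls

class Plugin(metaclass=PluginMeta):
    pass

class CsvExporter(Plugin):
    pass

class JsonExporter(Plugin):
    pass
```

This is essentially how ORMs collect model classes and how frameworks discover plugins; for the registration use case alone, the simpler __init_subclass__ hook (Python 3.6+) is often sufficient.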
12. What is the difference between abstract base classes (ABC) and interfaces?
Python does not have a built-in concept of an "interface" like Java but provides an analog — abstract base classes (ABC) — through the abc module.
A class inheriting from abc.ABC can contain methods marked with @abstractmethod. Such methods are mandatory to implement in child classes. Attempting to instantiate a class that does not implement all abstract methods will raise a TypeError.
Differences:
- Interfaces (in other languages) define only the signature;
- ABC in Python can contain both implementation and default behavior.
This allows applying SOLID principles, especially Liskov and Interface Segregation, in Python projects.
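A small sketch illustrating both points: the ABC ships default behavior alongside an abstract method, and instantiating it directly fails.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def save(self, key, value): ...

    def describe(self):                 # ABCs may carry real implementation
        return f"storage: {type(self).__name__}"

class MemoryStorage(Storage):
    def __init__(self):
        self.data = {}

    def save(self, key, value):         # satisfies the abstract contract
        self.data[key] = value

store = MemoryStorage()
store.save("a", 1)

instantiation_failed = False
try:
    Storage()                           # abstract method not implemented
except TypeError:
    instantiation_failed = True
```

The Storage and MemoryStorage names are illustrative; the point is that the contract is enforced at instantiation time, not at class-definition time.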
13. How is the caching mechanism implemented in Python, and when should you use functools.cache vs lru_cache?
functools.lru_cache is a decorator used to cache function results. It stores a limited number of unique calls, removing the "oldest" ones when the limit is exceeded.
Starting from version 3.9, functools.cache was introduced — a simplified lru_cache without a limit, effectively lru_cache(maxsize=None).
When to use:
- cache — for caching pure functions without the risk of memory overflow (e.g., Fibonacci calculation);
- lru_cache — when millions of unique calls are possible and memory usage must be restricted.
Both decorators improve performance and are especially useful in numerical computations, API wrappers, data validation, and expensive transformations.
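The Fibonacci case mentioned above shows the effect directly: with @cache, each distinct argument is computed once, collapsing the exponential call tree.

```python
from functools import cache, lru_cache

calls = {"fib": 0}

@cache                          # unbounded, i.e. lru_cache(maxsize=None)
def fib(n):
    calls["fib"] += 1           # count only the real computations
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(20)                # 21 real calls instead of ~20,000

@lru_cache(maxsize=2)           # bounded: evicts least-recently-used entries
def square(n):
    return n * n

square(1)
square(2)
square(3)                       # the entry for 1 is evicted here
info = square.cache_info()      # misses=3, currsize=2
```

Both decorators also expose cache_clear() for invalidation; note that arguments must be hashable, and that caching methods on instances can keep those instances alive.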
14. Explain the use of typing.Annotated, Final, Literal, and Protocol — what are they and why are they useful?
The typing module provides support for static type checking in Python.
- Annotated[T, ...] — enriches type T with additional metadata. Used for documentation, validation, and framework integrations (FastAPI, Pydantic).
- Final — marks a name as "not to be reassigned or overridden" (similar to final in Java). Supported by type checkers (mypy, pyright).
- Literal — restricts a value to specific literals (Literal['asc', 'desc']).
- Protocol — allows defining an interface using "duck typing": a class conforms to a Protocol if it has the required attributes/methods, without inheritance.
All of these constructs make the code safer, more self-documenting, and easier to scale.
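A compact sketch of all four constructs in one place (the names UserId, Closable, and Connection are illustrative). Most of the guarantees are enforced by static checkers, but a @runtime_checkable Protocol also works with isinstance.

```python
from typing import Annotated, Final, Literal, Protocol, runtime_checkable

MAX_RETRIES: Final = 3                  # checkers reject reassignment

UserId = Annotated[int, "positive database id"]   # metadata for tools

def sort_items(items: list, order: Literal["asc", "desc"]) -> list:
    # a checker rejects sort_items([], "up") at analysis time
    return sorted(items, reverse=(order == "desc"))

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

class Connection:                       # note: does NOT inherit Closable
    def close(self) -> None:
        pass

# Structural ("duck") typing: Connection conforms by shape alone
conforms = isinstance(Connection(), Closable)
```

Keep in mind that runtime_checkable isinstance checks only verify that the methods exist, not their signatures; full verification is the static checker's job.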
15. How is lazy evaluation implemented in Python?
Lazy evaluation means delaying the execution of a computation until its result is actually needed. In Python, this is achieved via:
- generators (yield), which return items on demand;
- iterable objects (__iter__ + __next__);
- functions like map, filter, zip — they return iterators instead of building collections (range similarly returns a lazy sequence);
- @property — computes the value only when accessed;
- functools.lru_cache — stores results after the first call;
- lazy_object_proxy — wraps an object that is loaded lazily.
Lazy evaluation helps reduce memory usage, improve performance, and build data processing pipelines.
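A small pipeline sketch: building the stages costs nothing, and work happens only for the items actually consumed, never for the full million-element range.

```python
def numbers():
    """Generator: produces values only when the consumer asks."""
    for n in range(1, 1_000_000):
        yield n

# Nothing is computed yet: these are lazy pipeline stages
squares = (n * n for n in numbers())
evens = filter(lambda n: n % 2 == 0, squares)

# Computation happens only now, and only for the items pulled
first_three = [next(evens) for _ in range(3)]
```

Replacing any stage with a list comprehension would materialize the whole intermediate collection in memory, which is exactly what laziness avoids.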
16. What is monkey patching and when is it justified to use it?
Monkey patching means dynamically changing the behavior of existing code (usually third-party) at runtime. For example, replacing a method, adding logging, or fixing a bug without modifying the original source code.
Use cases:
- temporary workarounds for bugs in third-party libraries;
- unit tests to replace dependent behaviors;
- injecting logic into uncontrolled code (e.g., legacy APIs).
Risks of monkey patching:
- may cause unpredictable behavior;
- difficult to debug;
- unstable with library updates.
Monkey patching should be used only in isolation, must be documented, and ideally replaced with official extensibility mechanisms.
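A sketch of both the manual patch and its safer, scoped alternative (WeatherClient is an illustrative stand-in for a third-party class):

```python
import unittest.mock as mock

class WeatherClient:
    def fetch(self, city):
        raise RuntimeError("real network call")

def report(client, city):
    return f"{city}: {client.fetch(city)}"

# Manual monkey patch: swap the method at runtime, then restore it
original = WeatherClient.fetch
WeatherClient.fetch = lambda self, city: "20C"
patched_result = report(WeatherClient(), "Oslo")
WeatherClient.fetch = original              # always restore

# Scoped alternative: unittest.mock.patch reverts automatically
with mock.patch.object(WeatherClient, "fetch", return_value="20C"):
    scoped_result = report(WeatherClient(), "Oslo")
```

The context-manager form is strongly preferable in tests: it cannot leak the patch past the block even if an assertion fails mid-way.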
17. How does closure work in Python?
A closure is a function that retains access to variables from an enclosing (non-global) scope even after that outer function has finished executing.
Key point: the inner function "captures" variable references, not copies. The variables are stored in __closure__ and are accessible via cell_contents.
Use cases:
- retaining state between calls without a class;
- function factories with pre-configured parameters;
- implementing delayed execution;
- decorators.
If you need to modify a captured variable, use the nonlocal keyword.
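The classic counter factory ties all of the above together: state without a class, nonlocal for rebinding, and the captured cell visible on the function object.

```python
def make_counter():
    count = 0                  # captured by reference, not copied

    def counter():
        nonlocal count         # required to rebind the captured variable
        count += 1
        return count

    return counter

tick = make_counter()
tick()
tick()
third = tick()

# The captured cell lives on the function object itself
cell_value = tick.__closure__[0].cell_contents
```

Each call to make_counter() produces an independent closure with its own cell, which is what makes this pattern useful for function factories.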
18. What is the difference between deepcopy, copy, serialization, and object cloning?
- copy.copy() — creates a shallow copy: the container is copied, but nested objects remain the same.
- copy.deepcopy() — creates a deep, recursive copy of all levels of the object.
- Serialization (pickle, json) — saves the object as a string or bytes for storage/transmission, and restores it via load.
Additionally:
- cloning — an OOP term, often implemented via __copy__/__deepcopy__ in custom classes;
- dataclasses.replace() — partial cloning of a dataclass.
It’s important to choose the approach based on the task: in-memory copying — copy, caching — pickle, export — json.
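The shallow-vs-deep distinction is easiest to see with nested lists: mutating the original leaks into the shallow copy but not into the deep one.

```python
import copy

matrix = [[1, 2], [3, 4]]

shallow = copy.copy(matrix)      # new outer list, same inner lists
deep = copy.deepcopy(matrix)     # fully independent structure

matrix[0][0] = 99

shallow_value = shallow[0][0]    # 99: the inner list is shared
deep_value = deep[0][0]          # 1: the deep copy is isolated
```

deepcopy also handles reference cycles via an internal memo dictionary, which naive recursive copying would loop on forever.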
19. How do you implement a thread-safe data structure or component in Python?
To ensure thread safety, use:
- threading.Lock() — for sections accessing shared data.
- queue.Queue — a built-in thread-safe queue.
- threading.RLock, Condition, Semaphore — for complex synchronization.
- collections.deque — append and pop from opposite ends are thread-safe (with or without maxlen).
If an object must be fully thread-safe, wrap it such that:
- each method runs inside a with lock block;
- atomic operations are used where possible (note that += on numbers is not atomic);
- state is modified in a single thread or via queues.
Also consider using concurrent.futures.ThreadPoolExecutor or moving computation to separate processes.
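A minimal thread-safe counter sketch; without the lock, the non-atomic += from four threads would routinely lose increments.

```python
import threading

class SafeCounter:
    """Counter whose increment is guarded by a lock (+= is not atomic)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        with self._lock:       # one thread at a time in the critical section
            self.value += 1

counter = SafeCounter()

def work():
    for _ in range(10_000):
        counter.increment()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all threads join, the count is exactly 40,000. An equivalent design would push increments through a queue.Queue consumed by a single thread, trading lock contention for message passing.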
20. What is dependency injection and how is it implemented in Python?
Dependency Injection (DI) is a pattern where an object’s dependencies are passed from the outside rather than created inside. This improves testability, flexibility, and separation of concerns.
In Python, DI can be implemented via:
- constructor or function arguments;
- factory functions;
- context managers or with blocks;
- external containers (injector, punq, wired);
- manual injection — passing dependencies via __init__ or calling set_dependency.
Example: instead of creating a database connection inside a class, it is passed as an argument.
DI is especially valuable in testing (dependencies can be mocked) and in scalable codebases that use inversion of control (IoC).
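The database example above can be sketched as constructor injection (the class names are illustrative): the service never constructs its own connection, so a test can hand it a fake.

```python
class Database:
    def query(self, sql):
        raise RuntimeError("real database not available here")

class FakeDatabase:
    """Test double with the same interface as Database."""

    def query(self, sql):
        return [{"id": 1, "name": "alice"}]

class UserService:
    def __init__(self, db):
        self.db = db            # dependency injected, not constructed inside

    def list_users(self):
        return self.db.query("SELECT * FROM users")

# In production: UserService(Database()); in tests, inject the fake
service = UserService(FakeDatabase())
users = service.list_users()
```

Because UserService depends only on the query interface, swapping implementations requires no changes to the service itself, which is the inversion-of-control payoff.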