This section details all the options available to Mappers, as well as advanced patterns.
To start, here are the tables we will work with again:
```python
from sqlalchemy import *

metadata = MetaData()

# a table to store users
users_table = Table('users', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(40)),
    Column('password', String(80))
)

# a table that stores mailing addresses associated with a specific user
addresses_table = Table('addresses', metadata,
    Column('address_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("users.user_id")),
    Column('street', String(100)),
    Column('city', String(80)),
    Column('state', String(2)),
    Column('zip', String(10))
)

# a table that stores keywords
keywords_table = Table('keywords', metadata,
    Column('keyword_id', Integer, primary_key=True),
    Column('name', VARCHAR(50))
)

# a table that associates keywords with users
userkeywords_table = Table('userkeywords', metadata,
    Column('user_id', INT, ForeignKey("users.user_id")),
    Column('keyword_id', INT, ForeignKey("keywords.keyword_id"))
)
```
When mappers are constructed, by default the column names in the Table metadata are used as the names of attributes on the mapped class. This can be customized within the properties by stating the key/column combinations explicitly:
```python
user_mapper = mapper(User, users_table, properties={
    'id': users_table.c.user_id,
    'name': users_table.c.user_name,
})
```
When column names overlap in a mapper against multiple tables, the overlapping columns may be referenced together as a list:
```python
# join users and addresses
usersaddresses = sql.join(users_table, addresses_table,
    users_table.c.user_id == addresses_table.c.user_id)

m = mapper(User, usersaddresses, properties={
    'id': [users_table.c.user_id, addresses_table.c.user_id],
})
```
A common request is the ability to create custom class properties that override the behavior of setting/getting an attribute. Currently, the easiest way to do this in SQLAlchemy is how it would be done in any Python program; define your attribute with a different name, such as "_attribute", and use a property to get/set its value. The mapper just needs to be told of the special name:
```python
class MyClass(object):
    def _set_email(self, email):
        self._email = email
    def _get_email(self):
        return self._email
    email = property(_get_email, _set_email)

mapper(MyClass, mytable, properties={
    # map the '_email' attribute to the "email" column on the table
    '_email': mytable.c.email
})
```
In a later release, SQLAlchemy will also allow _get_email and _set_email to be attached directly to the "email" property created by the mapper, and will also allow this association to occur via decorators.
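The property pattern itself is plain Python and can be tried without any mapper involved. A minimal sketch; the normalization in the setter is a hypothetical addition, shown only to illustrate the kind of behavior a property makes possible:

```python
class MyClass(object):
    def _get_email(self):
        return self._email
    def _set_email(self, email):
        # hypothetical extra behavior: normalize the address before
        # storing it on the underlying '_email' attribute
        self._email = email.strip().lower()
    email = property(_get_email, _set_email)

obj = MyClass()
obj.email = "  Fred@Example.COM "
print(obj.email)  # fred@example.com
```

When '_email' is mapped to the "email" column as above, reads and writes through the property transparently pass through whatever logic the getter and setter define.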
Feature Status: Alpha API
A one-to-many or many-to-many relationship results in a list-holding element being attached to all instances of a class. Currently, this list is an instance of sqlalchemy.util.HistoryArraySet, which decorates an underlying list object. The implementation of this list can be controlled, and can in fact be any object that implements list-style append and __iter__ methods. A common need is for a list-based relationship to actually be a dictionary. This can be achieved by subclassing dict to have list-like behavior.
In this example, a class MyClass is defined, which is associated with a parent object MyParent. The collection of MyClass objects on each MyParent object will be a dictionary, storing each MyClass instance keyed to its name attribute.
```python
# a class to be stored in the list
class MyClass(object):
    def __init__(self, name):
        self.name = name

# create a dictionary that will act like a list, and store
# instances of MyClass
class MyDict(dict):
    def append(self, item):
        self[item.name] = item
    def __iter__(self):
        return iter(self.values())

# parent class
class MyParent(object):
    # this class-level attribute provides the class to be
    # used by the 'myclasses' attribute
    myclasses = MyDict

# mappers, constructed normally
mapper(MyClass, myclass_table)
mapper(MyParent, myparent_table, properties={
    'myclasses': relation(MyClass)
})

# elements of 'myclasses' can be accessed via string keyname
myparent = MyParent()
myparent.myclasses.append(MyClass('this is myclass'))
myclass = myparent.myclasses['this is myclass']
```
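The dict subclass can be exercised on its own, with no mapper involved, to verify that it honors the list-style contract (append and __iter__) while still offering dictionary access:

```python
class MyClass(object):
    def __init__(self, name):
        self.name = name

class MyDict(dict):
    """a dict that also supports the list-style protocol
    expected of relationship collections"""
    def append(self, item):
        self[item.name] = item
    def __iter__(self):
        # iterate over the stored instances, not the keys
        return iter(self.values())

d = MyDict()
d.append(MyClass('node1'))
d.append(MyClass('node2'))
print(sorted(item.name for item in d))  # iteration yields MyClass instances
print(d['node1'].name)                  # dict-style access still works
```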
When creating relations on a mapper, most examples so far have illustrated the mapper and relationship joining up based on the foreign keys of the tables they represent. In fact, this "automatic" inspection can be completely circumvented using the primaryjoin and secondaryjoin arguments to relation, as in this example, which creates a User object with a relationship to all of its Addresses which are in Boston:
```python
class User(object):
    pass

class Address(object):
    pass

mapper(Address, addresses_table)
mapper(User, users_table, properties={
    'boston_addresses': relation(Address, primaryjoin=and_(
        users_table.c.user_id == addresses_table.c.user_id,
        addresses_table.c.city == 'Boston'))
})
```
Many-to-many relationships can be customized by one or both of primaryjoin and secondaryjoin, shown below with just the default many-to-many relationship explicitly set:
```python
class User(object):
    pass

class Keyword(object):
    pass

mapper(Keyword, keywords_table)
mapper(User, users_table, properties={
    'keywords': relation(Keyword,
        secondary=userkeywords_table,
        primaryjoin=users_table.c.user_id == userkeywords_table.c.user_id,
        secondaryjoin=userkeywords_table.c.keyword_id == keywords_table.c.keyword_id
    )
})
```
The previous example leads into the idea of joining against the same table multiple times. Below is a User object that has lists of its Boston and New York addresses:
```python
mapper(User, users_table, properties={
    'boston_addresses': relation(Address, primaryjoin=and_(
        users_table.c.user_id == addresses_table.c.user_id,
        addresses_table.c.city == 'Boston')),
    'newyork_addresses': relation(Address, primaryjoin=and_(
        users_table.c.user_id == addresses_table.c.user_id,
        addresses_table.c.city == 'New York')),
})
```
Both lazy and eager loading support multiple joins equally well.
This feature allows particular columns of a table to not be loaded by default, instead being loaded later on when first referenced. It is essentially "column-level lazy loading". This feature is useful when one wants to avoid loading a large text or binary field into memory when it's not needed. Individual columns can be lazy loaded by themselves or placed into groups that lazy-load together.
```python
book_excerpts = Table('books', db,
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', String),
    Column('photo', Binary)
)

class Book(object):
    pass

# define a mapper that will load each of 'excerpt' and 'photo' in
# separate, individual-row SELECT statements when each attribute
# is first referenced on the individual object instance
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt),
    'photo': deferred(book_excerpts.c.photo)
})
```
Deferred columns can be placed into groups so that they load together:
```python
book_excerpts = Table('books', db,
    Column('book_id', Integer, primary_key=True),
    Column('title', String(200), nullable=False),
    Column('summary', String(2000)),
    Column('excerpt', String),
    Column('photo1', Binary),
    Column('photo2', Binary),
    Column('photo3', Binary)
)

class Book(object):
    pass

# define a mapper with a 'photos' deferred group. when one photo is referenced,
# all three photos will be loaded in one SELECT statement. The 'excerpt' will
# be loaded separately when it is first referenced.
mapper(Book, book_excerpts, properties={
    'excerpt': deferred(book_excerpts.c.excerpt),
    'photo1': deferred(book_excerpts.c.photo1, group='photos'),
    'photo2': deferred(book_excerpts.c.photo2, group='photos'),
    'photo3': deferred(book_excerpts.c.photo3, group='photos')
})
```
Keyword options to the relation function include:

- foreignkey - the column in the join condition to be treated as the ForeignKey pointing to the other column in an equality expression. Specifying it here can override the normal foreign key properties of the join condition, which is useful for self-referential table relationships, join conditions where a ForeignKey is not present, or where the same column might appear on both sides of the join condition.
- private - private=True is the equivalent of setting cascade="all, delete-orphan", and indicates the lifecycle of child objects should be contained within that of the parent. See the example in datamapping_relations_cycle.
- backref - specifies a backreference property to be placed on the related class; can also be given as a backref() construct for more configurability. See Backreferences.
By default, mappers will attempt to ORDER BY the "oid" column of a table, or the primary key column, when selecting rows. This can be modified in several ways.
The "order_by" parameter can be sent to a mapper, overriding the per-engine ordering if any. A value of None means that the mapper should not use any ordering. A non-None value, which can be a column, an asc
or desc
clause, or an array of either one, indicates the ORDER BY clause that should be added to all select queries:
```python
# disable all ordering
mapper = mapper(User, users_table, order_by=None)

# order by a column
mapper = mapper(User, users_table, order_by=users_table.c.user_id)

# order by multiple items
mapper = mapper(User, users_table,
    order_by=[users_table.c.user_id, desc(users_table.c.user_name)])
```
"order_by" can also be specified to an individual select
method, overriding all other per-engine/per-mapper orderings:
```python
# order by a column
l = mapper.select(users_table.c.user_name == 'fred',
    order_by=users_table.c.user_id)

# order by multiple criteria
l = mapper.select(users_table.c.user_name == 'fred',
    order_by=[users_table.c.user_id, desc(users_table.c.user_name)])
```
For relations, the "order_by" property can also be specified to all forms of relation:
```python
# order address objects by address id
mapper = mapper(User, users_table, properties={
    'addresses': relation(mapper(Address, addresses_table),
        order_by=addresses_table.c.address_id)
})

# eager load with ordering - the ORDER BY clauses of parent/child
# will be organized properly
mapper = mapper(User, users_table, properties={
    'addresses': relation(mapper(Address, addresses_table),
        order_by=desc(addresses_table.c.street), eager=True)
}, order_by=users_table.c.user_id)
```
You can limit rows in a regular SQL query by specifying limit and offset. A Mapper can handle the same concepts:
```python
class User(object):
    pass

mapper(User, users_table)
r = session.query(User).select(limit=20, offset=10)
```
However, things get tricky when dealing with eager relationships, since a straight LIMIT of rows does not represent the count of items when joining against other tables to load related items as well. So here is what SQLAlchemy will do when you use limit or offset with an eager relationship:
```python
class User(object):
    pass

class Address(object):
    pass

mapper(User, users_table, properties={
    'addresses': relation(mapper(Address, addresses_table), lazy=False)
})
r = session.query(User).select(User.c.user_name.like('F%'), limit=20, offset=10)
```
The main WHERE clause as well as the limiting clauses are coerced into a subquery; this subquery represents the desired result of objects. A containing query, which handles the eager relationships, is joined against the subquery to produce the result.
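As a rough sketch (this is not the literal SQL SQLAlchemy emits, which varies by dialect and version), the statement for the example above is structured approximately like:

```sql
SELECT users.*, addresses.*
FROM (
    -- inner query: the WHERE, LIMIT and OFFSET apply to users alone,
    -- so the limit counts User objects rather than joined rows
    SELECT users.user_id FROM users
    WHERE users.user_name LIKE 'F%'
    LIMIT 20 OFFSET 10
) AS rowcount
JOIN users ON users.user_id = rowcount.user_id
LEFT OUTER JOIN addresses ON users.user_id = addresses.user_id
```

The outer query can then join freely against addresses without disturbing the count of User rows selected by the inner query.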
The options method on the Query object produces a new Query object by creating a copy of the underlying Mapper and placing modified properties on it. The options method is also directly available off the Mapper object itself, so that the newly copied Mapper can be dealt with directly. The options method takes a variable number of MapperOption objects which know how to change specific things about the mapper. The five available options are eagerload, lazyload, noload, deferred and extension.
An example of a mapper with a lazy load relationship, upgraded to an eager load relationship:
```python
class User(object):
    pass

class Address(object):
    pass

# a 'lazy' relationship
mapper(User, users_table, properties={
    'addresses': relation(mapper(Address, addresses_table), lazy=True)
})

# copy the mapper and convert 'addresses' to be eager
eagermapper = class_mapper(User).options(eagerload('addresses'))
```
The defer and undefer options can control the deferred loading of attributes:
```python
# set the 'excerpt' deferred attribute to load normally
m = book_mapper.options(undefer('excerpt'))

# set the referenced mapper 'photos' to defer its loading of the column 'imagedata'
m = book_mapper.options(defer('photos.imagedata'))
```
Feature Status: Alpha Implementation
Inheritance in databases comes in three forms: single table inheritance, where several types of classes are stored in one table, concrete table inheritance, where each type of class is stored in its own table, and multiple table inheritance, where the parent/child classes are stored in their own tables that are joined together in a select.
There is also a concept of polymorphic loading, which indicates whether multiple kinds of classes can be loaded in one pass.
SQLAlchemy supports all three kinds of inheritance. Additionally, true polymorphic loading is supported in a straightforward way for single table inheritance, and has some more manually-configured features that can make it happen for concrete and multiple table inheritance.
Working examples of polymorphic inheritance come with the distribution in the directory examples/polymorphic.
Here are the classes we will use to represent an inheritance relationship:
```python
class Employee(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name

class Manager(Employee):
    def __init__(self, name, manager_data):
        self.name = name
        self.manager_data = manager_data
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " + self.manager_data

class Engineer(Employee):
    def __init__(self, name, engineer_info):
        self.name = name
        self.engineer_info = engineer_info
    def __repr__(self):
        return self.__class__.__name__ + " " + self.name + " " + self.engineer_info
```
Each class supports a common name attribute, while the Manager class has its own attribute manager_data and the Engineer class has its own attribute engineer_info.
This will support polymorphic loading via the Employee mapper.
```python
employees_table = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
    Column('engineer_info', String(50)),
    Column('type', String(20))
)

employee_mapper = mapper(Employee, employees_table,
    polymorphic_on=employees_table.c.type)
manager_mapper = mapper(Manager, inherits=employee_mapper,
    polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, inherits=employee_mapper,
    polymorphic_identity='engineer')
```
Without polymorphic loading, you just define a separate mapper for each class.
```python
managers_table = Table('managers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('manager_data', String(50)),
)

engineers_table = Table('engineers', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('engineer_info', String(50)),
)

manager_mapper = mapper(Manager, managers_table)
engineer_mapper = mapper(Engineer, engineers_table)
```
With polymorphic loading, the SQL query to do the actual polymorphic load must be constructed, usually as a UNION. There is a helper function to create these UNIONs called polymorphic_union.
```python
pjoin = polymorphic_union({
    'manager': managers_table,
    'engineer': engineers_table
}, 'type', 'pjoin')

employee_mapper = mapper(Employee, pjoin, polymorphic_on=pjoin.c.type)
manager_mapper = mapper(Manager, managers_table,
    inherits=employee_mapper, concrete=True,
    polymorphic_identity='manager')
engineer_mapper = mapper(Engineer, engineers_table,
    inherits=employee_mapper, concrete=True,
    polymorphic_identity='engineer')
```
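As a rough illustration (the helper generates the full column list and NULL padding for you, so the exact output differs), the pjoin selectable corresponds to a UNION along these lines:

```sql
SELECT employee_id, name, manager_data, NULL AS engineer_info,
       'manager' AS type
FROM managers
UNION ALL
SELECT employee_id, name, NULL AS manager_data, engineer_info,
       'engineer' AS type
FROM engineers
```

The synthesized 'type' column is what polymorphic_on inspects to decide which class to instantiate for each row.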
A future release of SQLAlchemy might better merge the generated UNION into the mapper construction phase.
Like concrete table inheritance, this can be done non-polymorphically, or with a little more complexity, polymorphically:
```python
people = Table('people', metadata,
    Column('person_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('type', String(30)))

engineers = Table('engineers', metadata,
    Column('person_id', Integer, ForeignKey('people.person_id'), primary_key=True),
    Column('engineer_info', String(50)),
)

managers = Table('managers', metadata,
    Column('person_id', Integer, ForeignKey('people.person_id'), primary_key=True),
    Column('manager_data', String(50)),
)

person_mapper = mapper(Person, people)
mapper(Engineer, engineers, inherits=person_mapper)
mapper(Manager, managers, inherits=person_mapper)
```
Polymorphic:
```python
person_join = polymorphic_union({
    'engineer': people.join(engineers),
    'manager': people.join(managers),
    'person': people.select(people.c.type == 'person'),
}, None, 'pjoin')

person_mapper = mapper(Person, people,
    select_table=person_join,
    polymorphic_on=person_join.c.type,
    polymorphic_identity='person')
mapper(Engineer, engineers, inherits=person_mapper, polymorphic_identity='engineer')
mapper(Manager, managers, inherits=person_mapper, polymorphic_identity='manager')
```
The join condition in a multiple table inheritance relationship can be specified explicitly, using inherit_condition:
```python
AddressUser.mapper = mapper(
    AddressUser,
    addresses_table,
    inherits=User.mapper,
    inherit_condition=users_table.c.user_id == addresses_table.c.user_id
)
```
Mappers can be constructed against arbitrary relational units (called Selectables) as well as plain Tables. For example, the join keyword from the SQL package creates a neat selectable unit comprised of multiple tables, complete with its own composite primary key, which can be passed to a mapper as the table.
```python
# a class
class AddressUser(object):
    pass

# define a Join
j = join(users_table, addresses_table)

# map to it - the identity of an AddressUser object will be
# based on (user_id, address_id) since those are the primary keys involved
m = mapper(AddressUser, j, properties={
    'user_id': [users_table.c.user_id, addresses_table.c.user_id]
})
```
A second example:
```python
# many-to-many join on an association table
j = join(users_table, userkeywords,
        users_table.c.user_id == userkeywords.c.user_id).join(keywords,
        userkeywords.c.keyword_id == keywords.c.keyword_id)

# a class
class KeywordUser(object):
    pass

# map to it - the identity of a KeywordUser object will be
# (user_id, keyword_id) since those are the primary keys involved
m = mapper(KeywordUser, j, properties={
    'user_id': [users_table.c.user_id, userkeywords.c.user_id],
    'keyword_id': [userkeywords.c.keyword_id, keywords.c.keyword_id]
})
```
In both examples above, "composite" columns were added as properties to the mappers; these are aggregations of multiple columns into one mapper property, which instructs the mapper to keep both of those columns set at the same value.
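The "keep both columns set at the same value" idea can be illustrated in plain Python with a write-through property (a hypothetical sketch of the behavior, not the mapper's actual mechanism; the underscore attribute names here are invented for the illustration):

```python
class AddressUser(object):
    def _get_user_id(self):
        return self._users_user_id
    def _set_user_id(self, value):
        # write-through: both joined columns must carry the same value,
        # mirroring what the mapper does for a composite column property
        self._users_user_id = value
        self._addresses_user_id = value
    user_id = property(_get_user_id, _set_user_id)

au = AddressUser()
au.user_id = 7
print(au._users_user_id, au._addresses_user_id)  # 7 7
```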
Similar to mapping against a join, a plain select() object can be used with a mapper as well. Below, an example select which contains two aggregate functions and a group_by is mapped to a class:
```python
s = select([customers,
        func.count(orders.c.order_id).label('order_count'),
        func.max(orders.c.price).label('highest_order')],
    customers.c.customer_id == orders.c.customer_id,
    group_by=[c for c in customers.c]
)

class Customer(object):
    pass

m = mapper(Customer, s)
```
Above, the "customers" table is joined against the "orders" table to produce a full row for each customer row, the total count of related rows in the "orders" table, and the highest price in the "orders" table, grouped against the full set of columns in the "customers" table. That query is then mapped against the Customer class. New instances of Customer will contain attributes for each column in the "customers" table as well as an "ordercount" and "highestorder" attribute. Updates to the Customer object will only be reflected in the "customers" table and not the "orders" table. This is because the primary keys of the "orders" table are not represented in this mapper and therefore the table is not affected by save or delete operations.
The first mapper created for a certain class is known as that class's "primary mapper." Other mappers can be created as well; these come in two varieties.

- Secondary mappers - constructed with the keyword argument non_primary=True, representing a load-only mapper. Objects that are loaded with a secondary mapper will have their save operation processed by the primary mapper. It is also invalid to add new relation()s to a non-primary mapper. To use this mapper with the Session, specify it to the query method.
example:
```python
# primary mapper
mapper(User, users_table)

# make a secondary mapper to load User against a join
othermapper = mapper(User, users_table.join(someothertable), non_primary=True)

# select
result = session.query(othermapper).select()
```
- Entity name mappers - constructed with the entity_name parameter. Instances loaded with this mapper will be totally managed by this new mapper and have no connection to the original one. Most methods on Session include an optional entity_name parameter in order to specify this condition.
example:
```python
# primary mapper
mapper(User, users_table)

# make an entity name mapper that stores User objects in another table
mapper(User, alternate_users_table, entity_name='alt')

# make two User objects
user1 = User()
user2 = User()

# save one in the "users" table
session.save(user1)

# save the other in the "alternate_users_table"
session.save(user2, entity_name='alt')
session.flush()

# select from the alternate mapper
session.query(User, entity_name='alt').select()
```
Oftentimes it is necessary for two mappers to be related to each other. With a datamodel that consists of Users that store Addresses, you might have an Address object and want to access the "user" attribute on it, or have a User object and want to get the list of Address objects. The easiest way to do this is via the backref keyword described in Backreferences. However, even when backreferences are used, it is sometimes necessary to explicitly specify the relations on both mappers pointing to each other.
To achieve this involves creating the first mapper by itself, then creating the second mapper referencing the first, then adding references to the first mapper to reference the second:
```python
usermapper = mapper(User, users_table)
mapper(Address, addresses_table, properties={
    'user': relation(User)
})
usermapper.add_property('addresses', relation(Address))
```
Note that with a circular relationship as above, you cannot declare both relationships as "eager" relationships, since that produces a circular query situation which will generate a recursion exception. So what if you want to load an Address and its User eagerly? Just use eager options:
```python
eagerquery = session.query(Address).options(eagerload('user'))
s = eagerquery.select(Address.c.address_id == 12)
```
A self-referential mapper is a mapper that is designed to operate with an adjacency list table. This is a table that contains one or more foreign keys back to itself, and is usually used to create hierarchical tree structures. SQLAlchemy's default model of saving items based on table dependencies is not sufficient in this case, as an adjacency list table introduces dependencies between individual rows. Fortunately, SQLAlchemy will automatically detect a self-referential mapper and do the extra lifting to make it work.
```python
# define a self-referential table
trees = Table('treenodes', engine,
    Column('node_id', Integer, primary_key=True),
    Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('node_name', String(50), nullable=False),
)

# treenode class
class TreeNode(object):
    pass

# mapper defines "children" property, pointing back to the TreeNode class,
# with the mapper unspecified. it will point back to the primary
# mapper on the TreeNode class.
TreeNode.mapper = mapper(TreeNode, trees, properties={
    'children': relation(TreeNode, cascade="all, delete-orphan"),
})

# or, specify the circular relationship after establishing the original mapper:
mymapper = mapper(TreeNode, trees)
mymapper.add_property('children',
    relation(mymapper, cascade="all, delete-orphan"))
```
This kind of mapper goes through a lot of extra effort when saving and deleting items, to determine the correct dependency graph of nodes within the tree.
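The core of that extra effort is an ordering problem: a parent node's row must be INSERTed (so its primary key exists) before any child row that references it. A minimal sketch of that row-level ordering, independent of SQLAlchemy (the function and its arguments are invented for the illustration):

```python
def insert_order(nodes, parent_of):
    """return nodes ordered so every parent precedes its children.

    nodes     - iterable of node ids
    parent_of - dict mapping node id -> parent node id (None for roots)
    """
    ordered, seen = [], set()
    def visit(n):
        if n in seen:
            return
        parent = parent_of.get(n)
        if parent is not None:
            visit(parent)      # ensure the parent row is emitted first
        seen.add(n)
        ordered.append(n)
    for n in nodes:
        visit(n)
    return ordered

# a small tree: 1 is the root, 2 and 3 are children of 1, 4 is a child of 3
print(insert_order([4, 2, 3, 1], {1: None, 2: 1, 3: 1, 4: 3}))  # [1, 3, 4, 2]
```

Deletes run the same dependency in reverse: children must be removed before the parents they point to.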
A self-referential mapper where there is more than one relationship on the table requires that all join conditions be explicitly spelled out. Below is a self-referring table that contains a "parent_node_id" column to reference parent/child relationships, and a "root_node_id" column which points child nodes back to the ultimate root node:
```python
# define a self-referential table with several relations
trees = Table('treenodes', engine,
    Column('node_id', Integer, primary_key=True),
    Column('parent_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('root_node_id', Integer, ForeignKey('treenodes.node_id'), nullable=True),
    Column('node_name', String(50), nullable=False),
)

# treenode class
class TreeNode(object):
    pass

# define the "children" property as well as the "root" property
TreeNode.mapper = mapper(TreeNode, trees, properties={
    'children': relation(TreeNode,
        primaryjoin=trees.c.parent_node_id == trees.c.node_id,
        cascade="all, delete-orphan"),
    'root': relation(TreeNode,
        primaryjoin=trees.c.root_node_id == trees.c.node_id,
        foreignkey=trees.c.node_id,
        uselist=False)
})
```
The "root" property on a TreeNode is a many-to-one relationship. By default, a self-referential mapper declares relationships as one-to-many, so the extra parameter foreignkey
, pointing to a column or list of columns on the remote side of a relationship, is needed to indicate a "many-to-one" self-referring relationship.
Both TreeNode examples above are available in functional form in the examples/adjacencytree directory of the distribution.
Take any result set and feed it into a mapper to produce objects. Multiple mappers can be combined to retrieve unrelated objects from the same row in one step. The instances method on mapper takes a ResultProxy object, which is the result type generated from SQLEngine, and delivers object instances.
```python
class User(object):
    pass

User.mapper = mapper(User, users_table)

# select users
c = users_table.select().execute()

# get objects
userlist = User.mapper.instances(c)

# define a second class/mapper
class Address(object):
    pass

Address.mapper = mapper(Address, addresses_table)

# select users and addresses in one query
s = select([users_table, addresses_table],
    users_table.c.user_id == addresses_table.c.user_id)

# execute it, and process the results with the User mapper,
# chained to the Address mapper
r = User.mapper.instances(s.execute(), Address.mapper)

# result rows are an array of objects, one for each mapper used
for entry in r:
    user = entry[0]
    address = entry[1]
```
Other arguments not covered above include:

- select_table - a Selectable which will take the place of the Mapper's main table argument when performing queries.
Mappers can have their functionality augmented or replaced at many points in their execution via the usage of the MapperExtension class. This class is just a series of "hooks" where various functionality takes place. An application can make its own MapperExtension objects, overriding only the methods it needs. Methods that are not overridden return the special value sqlalchemy.orm.mapper.EXT_PASS, which indicates the operation should proceed normally.
```python
class MapperExtension(object):
    def select_by(self, query, *args, **kwargs):
        """overrides the select_by method of the Query object"""
    def select(self, query, *args, **kwargs):
        """overrides the select method of the Query object"""
    def create_instance(self, mapper, session, row, imap, class_):
        """called when a new object instance is about to be created from a row.
        the method can choose to create the instance itself, or it can return
        None to indicate normal object creation should take place.

        mapper - the mapper doing the operation
        row - the result row from the database
        imap - a dictionary that is storing the running set of objects
            collected from the current result set
        class_ - the class we are mapping.
        """
    def append_result(self, mapper, session, row, imap, result, instance,
                      isnew, populate_existing=False):
        """called when an object instance is being appended to a result list.

        If this method returns True, it is assumed that the mapper should do
        the appending, else if this method returns False, it is assumed that
        the append was handled by this method.

        mapper - the mapper doing the operation
        row - the result row from the database
        imap - a dictionary that is storing the running set of objects
            collected from the current result set
        result - an instance of util.HistoryArraySet(), which may be an
            attribute on an object if this is a related object load (lazy or
            eager). use result.append_nohistory(value) to append objects to
            this list.
        instance - the object instance to be appended to the result
        isnew - indicates if this is the first time we have seen this object
            instance in the current result set. if you are selecting from a
            join, such as an eager load, you might see the same object
            instance many times in the same result set.
        populate_existing - usually False, indicates if object instances that
            were already in the main identity map, i.e. were loaded by a
            previous select(), get their attributes overwritten
        """
    def populate_instance(self, mapper, session, instance, row, identitykey,
                          imap, isnew):
        """called right before the mapper, after creating an instance from a
        row, passes the row to its MapperProperty objects which are
        responsible for populating the object's attributes. If this method
        returns True, it is assumed that the mapper should do the population,
        else if this method returns False, it is assumed that the population
        was handled by this method.

        Essentially, this method is used to have a different mapper populate
        the object:

            def populate_instance(self, mapper, session, instance, row,
                                  identitykey, imap, isnew):
                othermapper.populate_instance(session, instance, row,
                    identitykey, imap, isnew, frommapper=mapper)
                return True
        """
    def before_insert(self, mapper, connection, instance):
        """called before an object instance is INSERTed into its table.

        this is a good place to set up primary key values and such that
        aren't handled otherwise."""
    def before_update(self, mapper, connection, instance):
        """called before an object instance is UPDATEd"""
    def after_update(self, mapper, connection, instance):
        """called after an object instance is UPDATEd"""
    def after_insert(self, mapper, connection, instance):
        """called after an object instance has been INSERTed"""
    def before_delete(self, mapper, connection, instance):
        """called before an object instance is DELETEd"""
    def after_delete(self, mapper, connection, instance):
        """called after an object instance is DELETEd"""
```
To use MapperExtension, make your own subclass of it and just send it off to a mapper:
```python
m = mapper(User, users_table, extension=MyExtension())
```
Multiple extensions will be chained together and processed in order; they are specified as a list:
```python
m = mapper(User, users_table, extension=[ext1, ext2, ext3])
```
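The chaining behavior can be sketched in plain Python: each hook is tried in order, and a sentinel value analogous to sqlalchemy.orm.mapper.EXT_PASS signals "not handled, fall through to the next extension" (the names and classes here are illustrative, not the real implementation):

```python
EXT_PASS = object()  # stand-in sentinel for "let the next extension decide"

class LoggingExt(object):
    def create_instance(self, row):
        print("saw row:", row)
        return EXT_PASS          # decline; pass control down the chain

class FactoryExt(object):
    def create_instance(self, row):
        return {"built_from": row}   # handle it; the chain stops here

def run_chain(extensions, row):
    # try each extension in order until one returns something other
    # than the EXT_PASS sentinel
    for ext in extensions:
        result = ext.create_instance(row)
        if result is not EXT_PASS:
            return result
    return None  # nothing handled it; the mapper would use default behavior

print(run_chain([LoggingExt(), FactoryExt()], (1, 'fred')))
```

LoggingExt observes every row but declines to handle it, so FactoryExt gets its turn; this is why extension order in the list matters.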