Almost all you need to know about triggers in SAP Sybase ASE

What is a Trigger?
In ASE, a trigger is a piece of SQL code that is executed automatically when a specific action happens on a specific table.
What are the actions that fire the trigger?
A trigger can be fired by three actions: insert, update, or delete, or any combination of them.
How many triggers can we have per table?
Prior to ASE 16.0 you could have only one trigger per action: one piece of code handling inserts, updates, or deletes, either individually or in any combination (e.g. one trigger for both update and delete, or a single trigger for all three actions).
Starting from ASE 16.0 you can have multiple triggers per action on every table, and you can also specify their order of execution. This is a helpful addition if you use triggers for multiple purposes (e.g. auditing and referential integrity): you can have two insert triggers, one to enforce referential integrity and one to perform the auditing.
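As a sketch of that ASE 16.0 feature (table, trigger, and column names and the error number are illustrative; check the exact placement of the order clause against the ASE 16.0 create trigger reference):

```sql
-- Two insert triggers on the same table with an explicit firing order
-- (ASE 16.0+). Referential integrity fires first, auditing second.
create trigger trg_orders_ri
on orders
for insert order 1
as
if not exists (select 1 from customers c, inserted i
               where c.cust_id = i.cust_id)
begin
    rollback trigger with raiserror 30001
        "order references unknown customer"
end
go

create trigger trg_orders_audit
on orders
for insert order 2
as
insert audit_log (table_name, action, changed_at)
select "orders", "insert", getdate()
from inserted
go
```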
Can a Trigger capture the before and after values of the firing action?
Yes. Every trigger has two internal virtual tables, inserted and deleted: inserted contains one row for each inserted row, deleted contains one row for each deleted row, and every updated row gets a row in both tables (the old row in deleted and the new row in inserted).
You can select from these two tables only from inside the trigger, and they contain only the rows affected by the statement that fired it.
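For example, a minimal audit trigger that captures the before and after values from deleted and inserted (table and column names are illustrative):

```sql
-- Update trigger recording old and new values from the two virtual
-- tables; joining deleted to inserted on the key pairs each old row
-- with its new version.
create trigger trg_account_audit
on account
for update
as
insert account_audit (acct_id, old_balance, new_balance, changed_at)
select d.acct_id, d.balance, i.balance, getdate()
from deleted d, inserted i
where d.acct_id = i.acct_id
go
```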
Can I Temporarily Disable the Trigger?
Yes. You can disable any trigger and re-enable it later using the alter table command:
alter table [database_name.[owner_name].]table_name
{enable | disable} trigger [trigger_name]
You can check the trigger status using:
sp_help trigger_name
Also, you can list all the disabled triggers in the database using the following SQL (these sysstat2 values are not documented):
select name,
       case
           when sysstat2 & 1048576 /* 0x100000 */ != 0
               then object_name(instrig) + " Insert trigger disabled"
           when sysstat2 & 2097152 /* 0x200000 */ != 0
               then object_name(deltrig) + " Delete trigger disabled"
           when sysstat2 & 4194304 /* 0x400000 */ != 0
               then object_name(updtrig) + " Update trigger disabled"
       end
from sysobjects
where type = "U"
  and (sysstat2 & 1048576 != 0
    or sysstat2 & 2097152 != 0
    or sysstat2 & 4194304 != 0)
Disabling a trigger can be useful for testing purposes, or during batch processing to reduce the trigger's overhead in an operation where no auditing is required.
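For instance, a batch load might be wrapped like this (table name is illustrative):

```sql
-- Omitting the trigger name disables/enables all triggers on the table.
alter table orders disable trigger
go
/* ... run the batch load here ... */
alter table orders enable trigger
go
```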
Trigger Rollbacks:
ASE treats the statement that fired the trigger (insert/update/delete) and the trigger code as one big transaction, so a simple rollback inside the trigger will roll back not only the trigger and its firing statement but also the WHOLE originating transaction.
Sybase provides an option to change that behavior: if you want to roll back only the trigger (and the statement that fired it), use the special rollback command:
rollback trigger
If the trigger that issues rollback trigger is nested within other triggers, the SAP ASE server rolls back all work done in these triggers up to and including the update that caused the first trigger to fire.
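A sketch of rollback trigger in use (table, trigger name, error number, and message are illustrative); the optional with raiserror clause lets the trigger report why the statement was undone:

```sql
create trigger trg_no_negative
on account
for update
as
if exists (select 1 from inserted where balance < 0)
begin
    -- Undoes only this trigger and the update that fired it,
    -- not the whole enclosing transaction.
    rollback trigger with raiserror 30002
        "balance cannot go negative"
end
go
```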
Trigger Self-recursion:
By default, a trigger does not call itself recursively. That is, an update trigger does not call itself in response to a second update to the same table within the trigger. If an update trigger on one column of a table results in an update to another column, the update trigger fires only once. However, you can turn on the self_recursion option of the set command to allow triggers to call themselves recursively. The "allow nested triggers" configuration variable must also be enabled for self-recursion to occur.
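The two switches mentioned above can be sketched as follows (a server-level configuration parameter plus a session-level set option):

```sql
-- Server level: nested triggers must be allowed.
sp_configure "allow nested triggers", 1
go
-- Session level: let a trigger fire itself recursively.
set self_recursion on
go
```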
Triggers and performance
In terms of performance, trigger overhead is usually very low. The time involved in running a trigger is spent mostly in referencing other tables, which may be either in memory or on the database device.
The deleted and inserted trigger test tables are always in active memory. The location of other tables referenced by the trigger determines the amount of time the operation takes.
Sample Syntax:
create trigger junktrig
on junk
for insert
as
if update(a) and update(b)
    print "FIRING"
go

/* "if update" is true for both columns.
   The trigger is activated. */
insert junk (a, b) values (1, 2)

/* "if update" is true for both columns.
   The trigger is activated. */
insert junk values (1, 2)

/* Explicit NULL: "if update" is true for both
   columns. The trigger is activated. */
insert junk values (NULL, 2)

/* If a default exists on column a,
   "if update" is true for either column.
   The trigger is activated. */
insert junk (b) values (2)

/* If no default exists on column a,
   "if update" is not true for column a.
   The trigger is not activated. */
insert junk (b) values (2)
For a deeper understanding of triggers in SAP Sybase ASE, please refer to the documentation.
Multi Version concurrency control VS Locking in banking environments
Fady Karam
While one of the ACID properties of database transactions is consistency, we can obviously see users asking for more concurrency. That's simply because concurrency and response time together form the equation of database throughput, in other words, PERFORMANCE.
Measuring database performance depends not only on how many transactions the database server (RDBMS) can perform, but also on whether it can keep that number just as high when multiple users are connected to the server.
If we want to understand the relation between concurrency and consistency, we have to think of it as a justice scale: on one plate we put concurrency and on the other we put consistency, so drastically increasing one of them will decrease the other.
RDBMS vendors have used multiple methods for concurrency control, but the goal is the same: to provide the end user with the best concurrency and consistency (remember the scale) while keeping the transactions ACID. Vendors like Oracle use multi-version concurrency control, while vendors like DB2, Sybase, and Microsoft use two-phase locking concurrency control. Let's first understand how each one works, then analyze how that can affect banking environments.
In the two-phase locking method, if transaction A is updating a row and transaction B is trying to read the same row, then transaction B has to wait until transaction A commits or rolls back, and only then does the server return the final value to transaction B. Say you are trying to select a book's price while the author is updating it: you can't get the price until the author commits or rolls back whatever changes he is making. In the database world this is called LOCK WAIT TIME.
On the other hand, if transaction A is reading some record and transaction B is trying to read the same record, there is no need to wait at all; but if transaction C is trying to modify that record, it will have to wait until both A and B finish reading and release their read locks.
In multi-version concurrency control, if transaction A is updating a row and transaction B is trying to read it, the server keeps TWO VERSIONS of the data: one committed version, which is returned to transaction B, and one being updated but not yet committed. The committed version is available to all other readers, while the uncommitted version is visible only to transaction A; in other words, WRITERS DON'T BLOCK READERS. From that concept we can see where the problem arises: once transaction A commits the change, the value that was read by transaction B is no longer correct, and if transaction B reads the same record again after A commits, it gets a different value. In other words, this is an unrepeatable read. The same rule applies if two transactions are trying to read the same data, except that in the latter situation no unrepeatable read can happen.
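The contrast can be sketched as a two-session transcript (table, column, and values are illustrative; actual behavior also depends on the isolation level each product uses by default):

```sql
-- Session A: the writer (deduction not yet committed)
begin transaction
update account set balance = balance - 40 where acct_id = 1  -- 100 -> 60

-- Session B: the reader, while A is still open
select balance from account where acct_id = 1
-- Two-phase locking (e.g. ASE): B waits -- LOCK WAIT TIME
-- Multi-version (e.g. Oracle):  B gets 100, the committed version

-- Session A commits:
commit
-- Under multi-versioning, B's earlier answer (100) is now stale;
-- selecting again returns 60: the unrepeatable read described above.
```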
Back to the main subject: how can banking, or any monetary transaction, be affected by multi-version concurrency control? If you are checking your balance while a deduction is being applied to it, your reading of the balance will be inaccurate once the deduction is committed. It can't affect other withdrawals or deductions, though, because in the end writers still block writers even with multi-versioning, just as in the two-phase locking mechanism.

The above behavior (an incorrect balance, for example) can never happen when using two-phase locking, since your balance query will wait until the deduction is committed or rolled back.
From another point of view, when it comes to non-monetary transactions, like checking product details on amazon.com, it never hurts to have an unrepeatable read if it provides more concurrency to millions of users trying to read the price or description of a product.
What remains to be said is that in 2010, according to the University of Sydney, and quoting from their article:
"Dr Michael Cahill['s] PhD thesis entitled Serializable Isolation for Snapshot Databases saw him develop an algorithm to avoid data inconsistency in databases accessed by multiple users.

The algorithm tackles problems occurring when many clients access a database simultaneously. While organizations like banks lock their databases so only one person can access a single piece of information at a time, this isn't tenable for retail websites where thousands of people often want [to access] the same book title at the same time. Most large databases therefore opt for what's called a multi version system. The existing multi version algorithms work well most of the time but their potential to produce inconsistent data can have serious consequences, says Dr Cahill. He cites the example of doctors scheduled to work at the same time changing their rosters simultaneously so that no one is rostered at all.

'I looked at algorithms used in real systems and found a practical way to maintain consistency,' he says. 'I've changed the algorithm in multi version systems so they keep track of more information. By tracking some additional information about concurrent operations, the algorithm guarantees the traditional kind of correctness expected of a database, or, to use the correct terminology, prevents anomalies.'"