author    Zhang Qiang <qiang.z.zhang@intel.com>  2012-05-29 11:25:24 +0800
committer Zhang Qiang <qiang.z.zhang@intel.com>  2012-05-29 11:25:24 +0800
commit    e776056ea09ba0b6d9505ced6913c9190a12d632 (patch)
tree      092838f2a86042abc586aa5576e36ae6cb47e256 /lock
parent    2e082c838d2ca750f5daac6dcdabecc22dfd4e46 (diff)
updated with Tizen:Base source codes
Diffstat (limited to 'lock')
-rw-r--r--  lock/Design            301
-rw-r--r--  lock/lock.c           1879
-rw-r--r--  lock/lock_deadlock.c  1045
-rw-r--r--  lock/lock_failchk.c    111
-rw-r--r--  lock/lock_id.c         460
-rw-r--r--  lock/lock_list.c       364
-rw-r--r--  lock/lock_method.c     536
-rw-r--r--  lock/lock_region.c     479
-rw-r--r--  lock/lock_stat.c       751
-rw-r--r--  lock/lock_stub.c       506
-rw-r--r--  lock/lock_timer.c      174
-rw-r--r--  lock/lock_util.c        97
12 files changed, 0 insertions, 6703 deletions
diff --git a/lock/Design b/lock/Design
deleted file mode 100644
index e423ff7..0000000
--- a/lock/Design
+++ /dev/null
@@ -1,301 +0,0 @@
-# $Id$
-
-Synchronization in the Locking Subsystem
-
-This is a document that describes how we implemented fine-grain locking
-in the lock manager (that is, locking on a hash bucket level instead of
-locking the entire region). We found that the increase in concurrency
-was not sufficient to warrant the increase in complexity or the additional
-cost of performing each lock operation. Therefore, we don't use this
-any more. Should we have to do fine-grain locking in a future release,
-this would be a reasonable starting point.
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-1. Data structures
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-
-The lock manager maintains 3 different structures:
-
-Objects (__db_lockobj):
- Describes an object that is locked. When used with DB, this consists
- of a __db_ilock (a file identifier and a page number).
-
-Lockers (__db_locker):
- Identifies a specific locker ID and maintains the head of a list of
- locks held by a locker (for use during transaction commit/abort).
-
-Locks (__db_lock):
- Describes a particular object lock held on behalf of a particular
- locker id.
-
-Objects and Lockers reference Locks.
-
-These structures are organized via two synchronized hash tables. Each
-hash table consists of two physical arrays: the array of actual hash
-buckets and an array of mutexes so we can lock individual buckets, rather
-than the whole table.
-
-One hash table contains Objects and the other hash table contains Lockers.
-Objects contain two lists of locks, waiters and holders: holders currently
-hold a lock on the Object, while waiters are locks waiting to be granted.
-Each Locker heads a singly linked list connecting the Locks held on behalf
-of that specific locker ID.
-
-In the diagram below:
-
-Locker ID #1 holds a lock on Object #1 (L1) and Object #2 (L5), and is
-waiting on a lock on Object #1 (L3).
-
-Locker ID #2 holds a lock on Object #1 (L2) and is waiting on a lock for
-Object #2 (L7).
-
-Locker ID #3 is waiting for a lock on Object #2 (L6).
-
- OBJECT -----------------------
- HASH | |
- ----|------------- |
- ________ _______ | | ________ | |
- | |-->| O1 |--|---|-->| O2 | | |
- |_______| |_____| | | |______| V |
- | | W H--->L1->L2 W H--->L5 | holders
- |_______| | | | | V
- | | ------->L3 \ ------->L6------>L7 waiters
- |_______| / \ \
- . . / \ \
- . . | \ \
- . . | \ -----------
- |_______| | -------------- |
- | | ____|____ ___|_____ _|______
- |_______| | | | | | |
- | | | LID1 | | LID2 | | LID3 |
- |_______| |_______| |_______| |______|
- ^ ^ ^
- | | |
- ___|________________________|________|___
- LOCKER | | | | | | | | |
- HASH | | | | | | | | |
- | | | | | | | | |
- |____|____|____|____|____|____|____|____|
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-2. Synchronization
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-
-There are four types of mutexes in the subsystem.
-
-Object mutexes:
- These map one-to-one to each bucket in the Object hash table.
- Holding a mutex on an Object bucket secures all the Objects in
- that bucket as well as the Lock structures linked from those
- Objects. All fields in the Locks EXCEPT the Locker links (the
- links that attach Locks by Locker ID) are protected by these
- mutexes.
-
-Locker mutexes:
- These map one-to-one to each bucket in the Locker hash table.
- Holding a mutex on a Locker bucket secures the Locker structures
- and the Locker links in the Locks.
-
-Memory mutex:
- This mutex allows calls to allocate/free memory, i.e. calls to
- __db_shalloc and __db_shalloc_free, as well as manipulation of
- the Object, Locker and Lock free lists.
-
-Region mutex:
- This mutex is currently only used to protect the locker IDs.
- It may also be needed later to provide exclusive access to
- the region for deadlock detection.
-
-Creating or removing a Lock requires locking both the Object lock and the
-Locker lock (and eventually the shalloc lock to return the item to the
-free list).
-
-The locking hierarchy is as follows:
-
- The Region mutex may never be acquired after any other mutex.
-
- The Object mutex may be acquired after the Region mutex.
-
- The Locker mutex may be acquired after the Region and Object
- mutexes.
-
- The Memory mutex may be acquired after any mutex.
-
-So, if both an Object mutex and a Locker mutex are going to be acquired,
-the Object mutex must be acquired first.
-
-The Memory mutex may be acquired after any other mutex, but no other mutexes
-can be acquired once the Memory mutex is held.
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-3. The algorithms
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-The locking subsystem supports four basic operations:
- Get a Lock (lock_get)
-
- Release a Lock (lock_put)
-
- Release all the Locks on a specific Object (lock_vec)
-
- Release all the Locks for a specific Locker (lock_vec)
-
-Get a lock:
- Acquire Object bucket mutex.
- Acquire Locker bucket mutex.
-
- Acquire Memory mutex.
- If the Object does not exist
- Take an Object off the freelist.
- If the Locker doesn't exist
- Take a Locker off the freelist.
- Take a Lock off the free list.
- Release Memory mutex.
-
- Add Lock to the Object list.
- Add Lock to the Locker list.
- Release Locker bucket mutex
-
- If the lock cannot be granted
- Release Object bucket mutex
- Acquire lock mutex (blocks)
-
- Acquire Object bucket mutex
- If lock acquisition did not succeed (e.g., deadlock)
- Acquire Locker bucket mutex
- If locker should be destroyed
- Remove locker from hash table
- Acquire Memory mutex
- Return locker to free list
- Release Memory mutex
- Release Locker bucket mutex
-
- If object should be released
- Acquire Memory mutex
- Return object to free list
- Release Memory mutex
-
- Release Object bucket mutex
-
-Release a lock:
- Acquire Object bucket mutex.
- (Requires that we be able to find the Object hash bucket
- without looking inside the Lock itself.)
-
- If releasing a single lock and the user-provided generation number
- doesn't match the Lock's generation number, the Lock has been reused
- and we return failure.
-
- Enter lock_put_internal:
- if the Lock is still on the Object's lists:
- Increment Lock's generation number.
- Remove Lock from the Object's list (NULL link fields).
- Promote locks for the Object.
-
- Enter locker_list_removal
- Acquire Locker bucket mutex.
- If Locker doesn't exist:
- Release Locker bucket mutex
- Release Object bucket mutex
- Return error.
- Else if Locker marked as deleted:
- dont_release = TRUE
- Else
- Remove Lock from Locker list.
- If Locker has no more locks
- Remove Locker from table.
- Acquire Memory mutex.
- Return Locker to free list
- Release Memory mutex
- Release Locker bucket mutex.
- Exit locker_list_removal
-
- If (!dont_release)
- Acquire Memory mutex
- Return Lock to free list
- Release Memory mutex
-
- Exit lock_put_internal
-
- Release Object bucket mutex
-
-Release all the Locks on a specific Object (lock_vec, DB_PUT_ALL_OBJ):
-
- Acquire Object bucket mutex.
-
- For each lock on the waiter list:
- lock_put_internal
- For each lock on the holder list:
- lock_put_internal
-
- Release Object bucket mutex.
-
-Release all the Locks for a specific Locker (lock_vec, DB_PUT_ALL):
-
- Acquire Locker bucket mutex.
- Mark Locker deleted.
- Release Locker mutex.
-
- For each lock on the Locker's list:
- Remove from locker's list
- (The lock could get put back on the free list in
- lock_put and then could get reallocated and the
- act of setting its locker links could clobber us.)
- Perform "Release a Lock" above: skip locker_list_removal.
-
- Acquire Locker bucket mutex.
- Remove Locker
- Release Locker mutex.
-
- Acquire Memory mutex
- Return Locker to free list
- Release Memory mutex
-
-Deadlock detection (lock_detect):
-
- For each bucket in Object table
- Acquire the Object bucket mutex.
- create waitsfor
-
- For each bucket in Object table
- Release the Object mutex.
-
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-FAQ:
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
-Q: Why do you need generation numbers?
-A: If a lock has been released due to a transaction abort (potentially in a
-   different process), and the lock is then released again by a thread of
-   control unaware of the abort, the lock might have been re-allocated to a
-   different object.  The generation numbers detect this problem.
-
-   Note that we assume reads/writes of lock generation numbers are atomic;
-   if they are not, it is theoretically possible that a re-allocated lock
-   could be mistaken for another lock.
-
-Q: Why is it safe to walk the Locker list without holding any mutexes at
- all?
-A: Locks are created with both the Object and Locker bucket mutexes held.
- Once created, they are removed in two ways:
-
- a) when a specific Lock is released, in which case, the Object and
- Locker bucket mutexes are again held, and
-
- b) when all Locks for a specific Locker ID are released.
-
- In case b), the Locker bucket mutex is held while the Locker chain is
- marked as "destroyed", which blocks any further access to the Locker
- chain. Then, each individual Object bucket mutex is acquired when each
- individual Lock is removed.
-
-Q: What are the implications of doing fine-grain locking?
-
-A: Since we no longer globally lock the entire region, lock_vec will no
- longer be atomic. We still execute the items in a lock_vec in order,
- so things like lock-coupling still work, but you can't make any
- guarantees about atomicity.
-
-Q: How do I configure for FINE_GRAIN locking?
-
-A: We currently do not support any automatic configuration for FINE_GRAIN
-   locking.  When we do, we will need to document the atomicity discussion
-   above (it is bug-report #553).
diff --git a/lock/lock.c b/lock/lock.c
deleted file mode 100644
index 1e4243c..0000000
--- a/lock/lock.c
+++ /dev/null
@@ -1,1879 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-#include "dbinc/log.h"
-
-static int __lock_allocobj __P((DB_LOCKTAB *, u_int32_t));
-static int __lock_alloclock __P((DB_LOCKTAB *, u_int32_t));
-static int __lock_freelock __P((DB_LOCKTAB *,
- struct __db_lock *, DB_LOCKER *, u_int32_t));
-static int __lock_getobj
- __P((DB_LOCKTAB *, const DBT *, u_int32_t, int, DB_LOCKOBJ **));
-static int __lock_get_api __P((ENV *,
- u_int32_t, u_int32_t, const DBT *, db_lockmode_t, DB_LOCK *));
-static int __lock_inherit_locks __P ((DB_LOCKTAB *, DB_LOCKER *, u_int32_t));
-static int __lock_is_parent __P((DB_LOCKTAB *, roff_t, DB_LOCKER *));
-static int __lock_put_internal __P((DB_LOCKTAB *,
- struct __db_lock *, u_int32_t, u_int32_t));
-static int __lock_put_nolock __P((ENV *, DB_LOCK *, int *, u_int32_t));
-static int __lock_remove_waiter __P((DB_LOCKTAB *,
- DB_LOCKOBJ *, struct __db_lock *, db_status_t));
-static int __lock_trade __P((ENV *, DB_LOCK *, DB_LOCKER *));
-static int __lock_vec_api __P((ENV *,
- u_int32_t, u_int32_t, DB_LOCKREQ *, int, DB_LOCKREQ **));
-
-static const char __db_lock_invalid[] = "%s: Lock is no longer valid";
-static const char __db_locker_invalid[] = "Locker is not valid";
-
-/*
- * __lock_vec_pp --
- * ENV->lock_vec pre/post processing.
- *
- * PUBLIC: int __lock_vec_pp __P((DB_ENV *,
- * PUBLIC: u_int32_t, u_int32_t, DB_LOCKREQ *, int, DB_LOCKREQ **));
- */
-int
-__lock_vec_pp(dbenv, lid, flags, list, nlist, elistp)
- DB_ENV *dbenv;
- u_int32_t lid, flags;
- int nlist;
- DB_LOCKREQ *list, **elistp;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_vec", DB_INIT_LOCK);
-
- /* Validate arguments. */
- if ((ret = __db_fchk(env,
- "DB_ENV->lock_vec", flags, DB_LOCK_NOWAIT)) != 0)
- return (ret);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env,
- (__lock_vec_api(env, lid, flags, list, nlist, elistp)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-static int
-__lock_vec_api(env, lid, flags, list, nlist, elistp)
- ENV *env;
- u_int32_t lid, flags;
- int nlist;
- DB_LOCKREQ *list, **elistp;
-{
- DB_LOCKER *sh_locker;
- int ret;
-
- if ((ret =
- __lock_getlocker(env->lk_handle, lid, 0, &sh_locker)) == 0)
- ret = __lock_vec(env, sh_locker, flags, list, nlist, elistp);
- return (ret);
-}
-
-/*
- * __lock_vec --
- * ENV->lock_vec.
- *
- * Vector lock routine. This function takes a set of operations
- * and performs them all at once. In addition, lock_vec provides
- * functionality for lock inheritance, releasing all locks for a
- * given locker (used during transaction commit/abort), releasing
- * all locks on a given object, and generating debugging information.
- *
- * PUBLIC: int __lock_vec __P((ENV *,
- * PUBLIC: DB_LOCKER *, u_int32_t, DB_LOCKREQ *, int, DB_LOCKREQ **));
- */
-int
-__lock_vec(env, sh_locker, flags, list, nlist, elistp)
- ENV *env;
- DB_LOCKER *sh_locker;
- u_int32_t flags;
- int nlist;
- DB_LOCKREQ *list, **elistp;
-{
- struct __db_lock *lp, *next_lock;
- DB_LOCK lock; DB_LOCKOBJ *sh_obj;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DBT *objlist, *np;
- u_int32_t ndx;
- int did_abort, i, ret, run_dd, upgrade, writes;
-
- /* Check if locks have been globally turned off. */
- if (F_ISSET(env->dbenv, DB_ENV_NOLOCKING))
- return (0);
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
-
- run_dd = 0;
- LOCK_SYSTEM_LOCK(lt, region);
- for (i = 0, ret = 0; i < nlist && ret == 0; i++)
- switch (list[i].op) {
- case DB_LOCK_GET_TIMEOUT:
- LF_SET(DB_LOCK_SET_TIMEOUT);
- /* FALLTHROUGH */
- case DB_LOCK_GET:
- if (IS_RECOVERING(env)) {
- LOCK_INIT(list[i].lock);
- break;
- }
- ret = __lock_get_internal(lt,
- sh_locker, flags, list[i].obj,
- list[i].mode, list[i].timeout, &list[i].lock);
- break;
- case DB_LOCK_INHERIT:
- ret = __lock_inherit_locks(lt, sh_locker, flags);
- break;
- case DB_LOCK_PUT:
- ret = __lock_put_nolock(env,
- &list[i].lock, &run_dd, flags);
- break;
- case DB_LOCK_PUT_ALL: /* Put all locks. */
- case DB_LOCK_PUT_READ: /* Put read locks. */
- case DB_LOCK_UPGRADE_WRITE:
- /* Upgrade was_write and put read locks. */
- /*
- * Since the locker may hold no
- * locks (i.e., you could call abort before you've
- * done any work), it's perfectly reasonable for there
- * to be no locker; this is not an error.
- */
- if (sh_locker == NULL)
- /*
- * If ret is set, then we'll generate an
- * error. If it's not set, we have nothing
- * to do.
- */
- break;
- upgrade = 0;
- writes = 1;
- if (list[i].op == DB_LOCK_PUT_READ)
- writes = 0;
- else if (list[i].op == DB_LOCK_UPGRADE_WRITE) {
- if (F_ISSET(sh_locker, DB_LOCKER_DIRTY))
- upgrade = 1;
- writes = 0;
- }
- objlist = list[i].obj;
- if (objlist != NULL) {
- /*
- * We know these should be ilocks,
- * but they could be something else,
- * so allocate room for the size too.
- */
- objlist->size =
- sh_locker->nwrites * sizeof(DBT);
- if ((ret = __os_malloc(env,
- objlist->size, &objlist->data)) != 0)
- goto up_done;
- memset(objlist->data, 0, objlist->size);
- np = (DBT *) objlist->data;
- } else
- np = NULL;
-
- /* Now traverse the locks, releasing each one. */
- for (lp = SH_LIST_FIRST(&sh_locker->heldby, __db_lock);
- lp != NULL; lp = next_lock) {
- sh_obj = (DB_LOCKOBJ *)
- ((u_int8_t *)lp + lp->obj);
- next_lock = SH_LIST_NEXT(lp,
- locker_links, __db_lock);
- if (writes == 1 ||
- lp->mode == DB_LOCK_READ ||
- lp->mode == DB_LOCK_READ_UNCOMMITTED) {
- SH_LIST_REMOVE(lp,
- locker_links, __db_lock);
- sh_obj = (DB_LOCKOBJ *)
- ((u_int8_t *)lp + lp->obj);
- ndx = sh_obj->indx;
- OBJECT_LOCK_NDX(lt, region, ndx);
- /*
- * We are not letting lock_put_internal
- * unlink the lock, so we'll have to
- * update counts here.
- */
- if (lp->status == DB_LSTAT_HELD) {
- DB_ASSERT(env,
- sh_locker->nlocks != 0);
- sh_locker->nlocks--;
- if (IS_WRITELOCK(lp->mode))
- sh_locker->nwrites--;
- }
- ret = __lock_put_internal(lt, lp,
- sh_obj->indx,
- DB_LOCK_FREE | DB_LOCK_DOALL);
- OBJECT_UNLOCK(lt, region, ndx);
- if (ret != 0)
- break;
- continue;
- }
- if (objlist != NULL) {
- DB_ASSERT(env, (u_int8_t *)np <
- (u_int8_t *)objlist->data +
- objlist->size);
- np->data = SH_DBT_PTR(&sh_obj->lockobj);
- np->size = sh_obj->lockobj.size;
- np++;
- }
- }
- if (ret != 0)
- goto up_done;
-
- if (objlist != NULL)
- if ((ret = __lock_fix_list(env,
- objlist, sh_locker->nwrites)) != 0)
- goto up_done;
- switch (list[i].op) {
- case DB_LOCK_UPGRADE_WRITE:
- /*
- * Upgrade all WWRITE locks to WRITE so
- * that we can abort a transaction which
- * was supporting dirty readers.
- */
- if (upgrade != 1)
- goto up_done;
- SH_LIST_FOREACH(lp, &sh_locker->heldby,
- locker_links, __db_lock) {
- if (lp->mode != DB_LOCK_WWRITE)
- continue;
- lock.off = R_OFFSET(&lt->reginfo, lp);
- lock.gen = lp->gen;
- F_SET(sh_locker, DB_LOCKER_INABORT);
- if ((ret = __lock_get_internal(lt,
- sh_locker, flags | DB_LOCK_UPGRADE,
- NULL, DB_LOCK_WRITE, 0, &lock)) !=0)
- break;
- }
- up_done:
- /* FALLTHROUGH */
- case DB_LOCK_PUT_READ:
- case DB_LOCK_PUT_ALL:
- break;
- default:
- break;
- }
- break;
- case DB_LOCK_PUT_OBJ:
- /* Remove all the locks associated with an object. */
- OBJECT_LOCK(lt, region, list[i].obj, ndx);
- if ((ret = __lock_getobj(lt, list[i].obj,
- ndx, 0, &sh_obj)) != 0 || sh_obj == NULL) {
- if (ret == 0)
- ret = EINVAL;
- OBJECT_UNLOCK(lt, region, ndx);
- break;
- }
-
- /*
- * Go through both waiters and holders. Don't bother
- * to run promotion, because everyone is getting
- * released. The processes waiting will still get
- * awakened as their waiters are released.
- */
- for (lp = SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock);
- ret == 0 && lp != NULL;
- lp = SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock))
- ret = __lock_put_internal(lt, lp, ndx,
- DB_LOCK_UNLINK |
- DB_LOCK_NOPROMOTE | DB_LOCK_DOALL);
-
- /*
- * On the last time around, the object will get
- * reclaimed by __lock_put_internal, structure the
- * loop carefully so we do not get bitten.
- */
- for (lp = SH_TAILQ_FIRST(&sh_obj->holders, __db_lock);
- ret == 0 && lp != NULL;
- lp = next_lock) {
- next_lock = SH_TAILQ_NEXT(lp, links, __db_lock);
- ret = __lock_put_internal(lt, lp, ndx,
- DB_LOCK_UNLINK |
- DB_LOCK_NOPROMOTE | DB_LOCK_DOALL);
- }
- OBJECT_UNLOCK(lt, region, ndx);
- break;
-
- case DB_LOCK_TIMEOUT:
- ret = __lock_set_timeout_internal(env,
- sh_locker, 0, DB_SET_TXN_NOW);
- break;
-
- case DB_LOCK_TRADE:
- /*
- * INTERNAL USE ONLY.
- * Change the holder of the lock described in
- * list[i].lock to the locker-id specified by
- * the locker parameter.
- */
- /*
- * You had better know what you're doing here.
- * We are trading locker-id's on a lock to
- * facilitate file locking on open DB handles.
- * We do not do any conflict checking on this,
- * so heaven help you if you use this flag under
- * any other circumstances.
- */
- ret = __lock_trade(env, &list[i].lock, sh_locker);
- break;
-#if defined(DEBUG) && defined(HAVE_STATISTICS)
- case DB_LOCK_DUMP:
- if (sh_locker == NULL)
- break;
-
- SH_LIST_FOREACH(
- lp, &sh_locker->heldby, locker_links, __db_lock)
- __lock_printlock(lt, NULL, lp, 1);
- break;
-#endif
- default:
- __db_errx(env,
- "Invalid lock operation: %d", list[i].op);
- ret = EINVAL;
- break;
- }
-
- if (ret == 0 && region->detect != DB_LOCK_NORUN &&
- (region->need_dd || timespecisset(&region->next_timeout)))
- run_dd = 1;
- LOCK_SYSTEM_UNLOCK(lt, region);
-
- if (run_dd)
- (void)__lock_detect(env, region->detect, &did_abort);
-
- if (ret != 0 && elistp != NULL)
- *elistp = &list[i - 1];
-
- return (ret);
-}
-
-/*
- * __lock_get_pp --
- * ENV->lock_get pre/post processing.
- *
- * PUBLIC: int __lock_get_pp __P((DB_ENV *,
- * PUBLIC: u_int32_t, u_int32_t, DBT *, db_lockmode_t, DB_LOCK *));
- */
-int
-__lock_get_pp(dbenv, locker, flags, obj, lock_mode, lock)
- DB_ENV *dbenv;
- u_int32_t locker, flags;
- DBT *obj;
- db_lockmode_t lock_mode;
- DB_LOCK *lock;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_get", DB_INIT_LOCK);
-
- /* Validate arguments. */
- if ((ret = __db_fchk(env, "DB_ENV->lock_get", flags,
- DB_LOCK_NOWAIT | DB_LOCK_UPGRADE | DB_LOCK_SWITCH)) != 0)
- return (ret);
-
- if ((ret = __dbt_usercopy(env, obj)) != 0)
- return (ret);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env,
- (__lock_get_api(env, locker, flags, obj, lock_mode, lock)),
- 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-static int
-__lock_get_api(env, locker, flags, obj, lock_mode, lock)
- ENV *env;
- u_int32_t locker, flags;
- const DBT *obj;
- db_lockmode_t lock_mode;
- DB_LOCK *lock;
-{
- DB_LOCKER *sh_locker;
- DB_LOCKREGION *region;
- int ret;
-
- COMPQUIET(region, NULL);
-
- region = env->lk_handle->reginfo.primary;
-
- LOCK_SYSTEM_LOCK(env->lk_handle, region);
- LOCK_LOCKERS(env, region);
- ret = __lock_getlocker_int(env->lk_handle, locker, 0, &sh_locker);
- UNLOCK_LOCKERS(env, region);
- if (ret == 0)
- ret = __lock_get_internal(env->lk_handle,
- sh_locker, flags, obj, lock_mode, 0, lock);
- LOCK_SYSTEM_UNLOCK(env->lk_handle, region);
- return (ret);
-}
-
-/*
- * __lock_get --
- * ENV->lock_get.
- *
- * PUBLIC: int __lock_get __P((ENV *,
- * PUBLIC: DB_LOCKER *, u_int32_t, const DBT *, db_lockmode_t, DB_LOCK *));
- */
-int
-__lock_get(env, locker, flags, obj, lock_mode, lock)
- ENV *env;
- DB_LOCKER *locker;
- u_int32_t flags;
- const DBT *obj;
- db_lockmode_t lock_mode;
- DB_LOCK *lock;
-{
- DB_LOCKTAB *lt;
- int ret;
-
- lt = env->lk_handle;
-
- if (IS_RECOVERING(env)) {
- LOCK_INIT(*lock);
- return (0);
- }
-
- LOCK_SYSTEM_LOCK(lt, (DB_LOCKREGION *)lt->reginfo.primary);
- ret = __lock_get_internal(lt, locker, flags, obj, lock_mode, 0, lock);
- LOCK_SYSTEM_UNLOCK(lt, (DB_LOCKREGION *)lt->reginfo.primary);
- return (ret);
-}
-/*
- * __lock_alloclock -- allocate a lock from another partition.
- * We assume we have the partition locked on entry and leave
- * it unlocked on success since we will have to retry the lock operation.
- * The mutex will be locked if we are out of space.
- */
-static int
-__lock_alloclock(lt, part_id)
- DB_LOCKTAB *lt;
- u_int32_t part_id;
-{
- struct __db_lock *sh_lock;
- DB_LOCKPART *end_p, *cur_p;
- DB_LOCKREGION *region;
- int begin;
-
- region = lt->reginfo.primary;
-
- if (region->part_t_size == 1)
- goto err;
-
- begin = 0;
- sh_lock = NULL;
- cur_p = &lt->part_array[part_id];
- MUTEX_UNLOCK(lt->env, cur_p->mtx_part);
- end_p = &lt->part_array[region->part_t_size];
- /*
- * Start looking at the next partition and wrap around. If
- * we get back to our partition then raise an error.
- */
-again: for (cur_p++; sh_lock == NULL && cur_p < end_p; cur_p++) {
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
- if ((sh_lock =
- SH_TAILQ_FIRST(&cur_p->free_locks, __db_lock)) != NULL)
- SH_TAILQ_REMOVE(&cur_p->free_locks,
- sh_lock, links, __db_lock);
- MUTEX_UNLOCK(lt->env, cur_p->mtx_part);
- }
- if (sh_lock != NULL) {
- cur_p = &lt->part_array[part_id];
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
- SH_TAILQ_INSERT_HEAD(&cur_p->free_locks,
- sh_lock, links, __db_lock);
- STAT(cur_p->part_stat.st_locksteals++);
- MUTEX_UNLOCK(lt->env, cur_p->mtx_part);
- return (0);
- }
- if (!begin) {
- begin = 1;
- cur_p = lt->part_array;
- end_p = &lt->part_array[part_id];
- goto again;
- }
-
- cur_p = &lt->part_array[part_id];
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
-
-err: return (__lock_nomem(lt->env, "lock entries"));
-}
-
-/*
- * __lock_get_internal --
- * All the work for lock_get (and for the GET option of lock_vec) is done
- * inside of lock_get_internal.
- *
- * PUBLIC: int __lock_get_internal __P((DB_LOCKTAB *, DB_LOCKER *, u_int32_t,
- * PUBLIC: const DBT *, db_lockmode_t, db_timeout_t, DB_LOCK *));
- */
-int
-__lock_get_internal(lt, sh_locker, flags, obj, lock_mode, timeout, lock)
- DB_LOCKTAB *lt;
- DB_LOCKER *sh_locker;
- u_int32_t flags;
- const DBT *obj;
- db_lockmode_t lock_mode;
- db_timeout_t timeout;
- DB_LOCK *lock;
-{
- struct __db_lock *newl, *lp;
- ENV *env;
- DB_LOCKOBJ *sh_obj;
- DB_LOCKREGION *region;
- DB_THREAD_INFO *ip;
- u_int32_t ndx, part_id;
- int did_abort, ihold, grant_dirty, no_dd, ret, t_ret;
- roff_t holder, sh_off;
-
- /*
- * We decide what action to take based on what locks are already held
- * and what locks are in the wait queue.
- */
- enum {
- GRANT, /* Grant the lock. */
- UPGRADE, /* Upgrade the lock. */
- HEAD, /* Wait at head of wait queue. */
- SECOND, /* Wait as the second waiter. */
- TAIL /* Wait at tail of the wait queue. */
- } action;
-
- env = lt->env;
- region = lt->reginfo.primary;
-
- /* Check if locks have been globally turned off. */
- if (F_ISSET(env->dbenv, DB_ENV_NOLOCKING))
- return (0);
-
- if (sh_locker == NULL) {
- __db_errx(env, "Locker does not exist");
- return (EINVAL);
- }
-
- no_dd = ret = 0;
- newl = NULL;
- sh_obj = NULL;
-
- /* Check that the lock mode is valid. */
- if (lock_mode >= (db_lockmode_t)region->nmodes) {
- __db_errx(env, "DB_ENV->lock_get: invalid lock mode %lu",
- (u_long)lock_mode);
- return (EINVAL);
- }
-
-again: if (obj == NULL) {
- DB_ASSERT(env, LOCK_ISSET(*lock));
- lp = R_ADDR(&lt->reginfo, lock->off);
- sh_obj = (DB_LOCKOBJ *)((u_int8_t *)lp + lp->obj);
- ndx = sh_obj->indx;
- OBJECT_LOCK_NDX(lt, region, ndx);
- } else {
- /* Allocate a shared memory new object. */
- OBJECT_LOCK(lt, region, obj, lock->ndx);
- ndx = lock->ndx;
- if ((ret = __lock_getobj(lt, obj, lock->ndx, 1, &sh_obj)) != 0)
- goto err;
- }
-
-#ifdef HAVE_STATISTICS
- if (LF_ISSET(DB_LOCK_UPGRADE))
- lt->obj_stat[ndx].st_nupgrade++;
- else if (!LF_ISSET(DB_LOCK_SWITCH))
- lt->obj_stat[ndx].st_nrequests++;
-#endif
-
- /*
- * Figure out if we can grant this lock or if it should wait.
- * By default, we can grant the new lock if it does not conflict with
- * anyone on the holders list OR anyone on the waiters list.
- * The reason that we don't grant if there's a conflict is that
- * this can lead to starvation (a writer waiting on a popularly
- * read item will never be granted). The downside of this is that
- * a waiting reader can prevent an upgrade from reader to writer,
- * which is not uncommon.
- *
- * There are two exceptions to the no-conflict rule. First, if
- * a lock is held by the requesting locker AND the new lock does
- * not conflict with any other holders, then we grant the lock.
- * The most common place this happens is when the holder has a
- * WRITE lock and a READ lock request comes in for the same locker.
- * If we do not grant the read lock, then we guarantee deadlock.
- * Second, dirty readers are granted if at all possible while
- * avoiding starvation, see below.
- *
- * In case of conflict, we put the new lock on the end of the waiters
- * list, unless we are upgrading or this is a dirty reader in which
- * case the locker goes at or near the front of the list.
- */
- ihold = 0;
- grant_dirty = 0;
- holder = 0;
-
- /*
- * SWITCH is a special case, used by the queue access method
- * when we want to get an entry which is past the end of the queue.
- * We have a DB_READ_LOCK and need to switch it to DB_LOCK_WAIT and
- * join the waiters queue. This must be done as a single operation
- * so that another locker cannot get in and fail to wake us up.
- */
- if (LF_ISSET(DB_LOCK_SWITCH))
- lp = NULL;
- else
- lp = SH_TAILQ_FIRST(&sh_obj->holders, __db_lock);
-
- sh_off = R_OFFSET(&lt->reginfo, sh_locker);
- for (; lp != NULL; lp = SH_TAILQ_NEXT(lp, links, __db_lock)) {
- DB_ASSERT(env, lp->status != DB_LSTAT_FREE);
- if (sh_off == lp->holder) {
- if (lp->mode == lock_mode &&
- lp->status == DB_LSTAT_HELD) {
- if (LF_ISSET(DB_LOCK_UPGRADE))
- goto upgrade;
-
- /*
- * Lock is held, so we can increment the
- * reference count and return this lock
- * to the caller. We do not count reference
- * increments towards the locks held by
- * the locker.
- */
- lp->refcount++;
- lock->off = R_OFFSET(&lt->reginfo, lp);
- lock->gen = lp->gen;
- lock->mode = lp->mode;
- goto done;
- } else {
- ihold = 1;
- }
- } else if (__lock_is_parent(lt, lp->holder, sh_locker))
- ihold = 1;
- else if (CONFLICTS(lt, region, lp->mode, lock_mode))
- break;
- else if (lp->mode == DB_LOCK_READ ||
- lp->mode == DB_LOCK_WWRITE) {
- grant_dirty = 1;
- holder = lp->holder;
- }
- }
-
- /*
- * If there are conflicting holders we will have to wait. If we
- * already hold a lock on this object or are doing an upgrade or
- * this is a dirty reader it goes to the head of the queue, everyone
- * else to the back.
- */
- if (lp != NULL) {
- if (ihold || LF_ISSET(DB_LOCK_UPGRADE) ||
- lock_mode == DB_LOCK_READ_UNCOMMITTED)
- action = HEAD;
- else
- action = TAIL;
- } else {
- if (LF_ISSET(DB_LOCK_SWITCH))
- action = TAIL;
- else if (LF_ISSET(DB_LOCK_UPGRADE))
- action = UPGRADE;
- else if (ihold)
- action = GRANT;
- else {
- /*
- * Look for conflicting waiters.
- */
- SH_TAILQ_FOREACH(
- lp, &sh_obj->waiters, links, __db_lock)
- if (CONFLICTS(lt, region, lp->mode,
- lock_mode) && sh_off != lp->holder)
- break;
-
- /*
- * If there are no conflicting holders or waiters,
- * then we grant. Normally when we wait, we
- * wait at the end (TAIL). However, the goal of
- * DIRTY_READ locks to allow forward progress in the
- * face of updating transactions, so we try to allow
- * all DIRTY_READ requests to proceed as rapidly
- * as possible, so long as we can prevent starvation.
- *
- * When determining how to queue a DIRTY_READ
- * request:
- *
- * 1. If there is a waiting upgrading writer,
- * then we enqueue the dirty reader BEHIND it
- * (second in the queue).
- * 2. Else, if the current holders are either
- * READ or WWRITE, we grant
- * 3. Else queue SECOND i.e., behind the first
- * waiter.
- *
- * The end result is that dirty_readers get to run
- * so long as other lockers are blocked. Once
- * there is a locker which is only waiting on
- * dirty readers then they queue up behind that
- * locker so that it gets to run. In general
- * this locker will be a WRITE which will shortly
- * get downgraded to a WWRITE, permitting the
- * DIRTY locks to be granted.
- */
- if (lp == NULL)
- action = GRANT;
- else if (grant_dirty &&
- lock_mode == DB_LOCK_READ_UNCOMMITTED) {
- /*
- * An upgrade will be at the head of the
- * queue.
- */
- lp = SH_TAILQ_FIRST(
- &sh_obj->waiters, __db_lock);
- if (lp->mode == DB_LOCK_WRITE &&
- lp->holder == holder)
- action = SECOND;
- else
- action = GRANT;
- } else if (lock_mode == DB_LOCK_READ_UNCOMMITTED)
- action = SECOND;
- else
- action = TAIL;
- }
- }
-
- switch (action) {
- case HEAD:
- case TAIL:
- case SECOND:
- case GRANT:
- part_id = LOCK_PART(region, ndx);
- /* Allocate a new lock. */
- if ((newl = SH_TAILQ_FIRST(
- &FREE_LOCKS(lt, part_id), __db_lock)) == NULL) {
- if ((ret = __lock_alloclock(lt, part_id)) != 0)
- goto err;
- /* Dropped the mutex start over. */
- goto again;
- }
- SH_TAILQ_REMOVE(
- &FREE_LOCKS(lt, part_id), newl, links, __db_lock);
-
-#ifdef HAVE_STATISTICS
- /*
- * Keep track of the maximum number of locks allocated
- * in each partition and the maximum number of locks
- * used by any one bucket.
- */
- if (++lt->obj_stat[ndx].st_nlocks >
- lt->obj_stat[ndx].st_maxnlocks)
- lt->obj_stat[ndx].st_maxnlocks =
- lt->obj_stat[ndx].st_nlocks;
- if (++lt->part_array[part_id].part_stat.st_nlocks >
- lt->part_array[part_id].part_stat.st_maxnlocks)
- lt->part_array[part_id].part_stat.st_maxnlocks =
- lt->part_array[part_id].part_stat.st_nlocks;
-#endif
-
- newl->holder = R_OFFSET(&lt->reginfo, sh_locker);
- newl->refcount = 1;
- newl->mode = lock_mode;
- newl->obj = (roff_t)SH_PTR_TO_OFF(newl, sh_obj);
- newl->indx = sh_obj->indx;
- /*
- * Now, insert the lock onto its locker's list.
- * If the locker does not currently hold any locks,
- * there's no reason to run the deadlock
- * detector; save that information.
- */
- no_dd = sh_locker->master_locker == INVALID_ROFF &&
- SH_LIST_FIRST(
- &sh_locker->child_locker, __db_locker) == NULL &&
- SH_LIST_FIRST(&sh_locker->heldby, __db_lock) == NULL;
-
- SH_LIST_INSERT_HEAD(
- &sh_locker->heldby, newl, locker_links, __db_lock);
-
- /*
- * Allocate a mutex if we do not have a mutex backing the lock.
- *
- * Use the lock mutex to block the thread; lock the mutex
- * when it is allocated so that we will block when we try
- * to lock it again. We will wake up when another thread
- * grants the lock and releases the mutex. We leave it
- * locked for the next use of this lock object.
- */
- if (newl->mtx_lock == MUTEX_INVALID) {
- if ((ret = __mutex_alloc(env, MTX_LOGICAL_LOCK,
- DB_MUTEX_LOGICAL_LOCK | DB_MUTEX_SELF_BLOCK,
- &newl->mtx_lock)) != 0)
- goto err;
- MUTEX_LOCK(env, newl->mtx_lock);
- }
- break;
-
- case UPGRADE:
-upgrade: lp = R_ADDR(&lt->reginfo, lock->off);
- if (IS_WRITELOCK(lock_mode) && !IS_WRITELOCK(lp->mode))
- sh_locker->nwrites++;
- lp->mode = lock_mode;
- goto done;
- }
-
- switch (action) {
- case UPGRADE:
- DB_ASSERT(env, 0);
- break;
- case GRANT:
- newl->status = DB_LSTAT_HELD;
- SH_TAILQ_INSERT_TAIL(&sh_obj->holders, newl, links);
- break;
- case HEAD:
- case TAIL:
- case SECOND:
- if (LF_ISSET(DB_LOCK_NOWAIT)) {
- ret = DB_LOCK_NOTGRANTED;
- STAT(region->stat.st_lock_nowait++);
- goto err;
- }
- if ((lp =
- SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock)) == NULL) {
- LOCK_DD(env, region);
- SH_TAILQ_INSERT_HEAD(&region->dd_objs,
- sh_obj, dd_links, __db_lockobj);
- UNLOCK_DD(env, region);
- }
- switch (action) {
- case HEAD:
- SH_TAILQ_INSERT_HEAD(
- &sh_obj->waiters, newl, links, __db_lock);
- break;
- case SECOND:
- SH_TAILQ_INSERT_AFTER(
- &sh_obj->waiters, lp, newl, links, __db_lock);
- break;
- case TAIL:
- SH_TAILQ_INSERT_TAIL(&sh_obj->waiters, newl, links);
- break;
- default:
- DB_ASSERT(env, 0);
- }
-
- /*
- * First check to see if this txn has expired.
- * If not, see whether the lock timeout is past
- * the expiration of the txn; if it is, use
- * the txn expiration time. lk_expire is passed
- * to avoid an extra call to get the time.
- */
- if (__lock_expired(env,
- &sh_locker->lk_expire, &sh_locker->tx_expire)) {
- newl->status = DB_LSTAT_EXPIRED;
- sh_locker->lk_expire = sh_locker->tx_expire;
-
- /* We are done. */
- goto expired;
- }
-
- /*
- * If a timeout was specified in this call then it
- * takes priority. If a lock timeout has been specified
- * for this transaction then use that, otherwise use
- * the global timeout value.
- */
- if (!LF_ISSET(DB_LOCK_SET_TIMEOUT)) {
- if (F_ISSET(sh_locker, DB_LOCKER_TIMEOUT))
- timeout = sh_locker->lk_timeout;
- else
- timeout = region->lk_timeout;
- }
- if (timeout != 0)
- __lock_expires(env, &sh_locker->lk_expire, timeout);
- else
- timespecclear(&sh_locker->lk_expire);
-
- if (timespecisset(&sh_locker->tx_expire) &&
- (timeout == 0 || __lock_expired(env,
- &sh_locker->lk_expire, &sh_locker->tx_expire)))
- sh_locker->lk_expire = sh_locker->tx_expire;
- if (timespecisset(&sh_locker->lk_expire) &&
- (!timespecisset(&region->next_timeout) ||
- timespeccmp(
- &region->next_timeout, &sh_locker->lk_expire, >)))
- region->next_timeout = sh_locker->lk_expire;
-
-in_abort: newl->status = DB_LSTAT_WAITING;
- STAT(lt->obj_stat[ndx].st_lock_wait++);
- /* We are about to block, deadlock detector must run. */
- region->need_dd = 1;
-
- OBJECT_UNLOCK(lt, region, sh_obj->indx);
-
- /* If we are switching, drop the lock we had. */
- if (LF_ISSET(DB_LOCK_SWITCH) &&
- (ret = __lock_put_nolock(env,
- lock, &ihold, DB_LOCK_NOWAITERS)) != 0) {
- OBJECT_LOCK_NDX(lt, region, sh_obj->indx);
- (void)__lock_remove_waiter(
- lt, sh_obj, newl, DB_LSTAT_FREE);
- goto err;
- }
-
- LOCK_SYSTEM_UNLOCK(lt, region);
-
- /*
- * Before waiting, see if the deadlock detector should run.
- */
- if (region->detect != DB_LOCK_NORUN && !no_dd)
- (void)__lock_detect(env, region->detect, &did_abort);
-
- ip = NULL;
- if (env->thr_hashtab != NULL &&
- (ret = __env_set_state(env, &ip, THREAD_BLOCKED)) != 0) {
- LOCK_SYSTEM_LOCK(lt, region);
- OBJECT_LOCK_NDX(lt, region, ndx);
- goto err;
- }
-
- MUTEX_LOCK(env, newl->mtx_lock);
- if (ip != NULL)
- ip->dbth_state = THREAD_ACTIVE;
-
- LOCK_SYSTEM_LOCK(lt, region);
- OBJECT_LOCK_NDX(lt, region, ndx);
-
- /* Turn off lock timeout. */
- if (newl->status != DB_LSTAT_EXPIRED)
- timespecclear(&sh_locker->lk_expire);
-
- switch (newl->status) {
- case DB_LSTAT_ABORTED:
- /*
- * If we raced with the deadlock detector and it
- * mistakenly picked this transaction to abort again,
- * ignore the abort and request the lock again.
- */
- if (F_ISSET(sh_locker, DB_LOCKER_INABORT))
- goto in_abort;
- ret = DB_LOCK_DEADLOCK;
- goto err;
- case DB_LSTAT_EXPIRED:
-expired: ret = __lock_put_internal(lt, newl,
- ndx, DB_LOCK_UNLINK | DB_LOCK_FREE);
- newl = NULL;
- if (ret != 0)
- goto err;
-#ifdef HAVE_STATISTICS
- if (timespeccmp(
- &sh_locker->lk_expire, &sh_locker->tx_expire, ==))
- lt->obj_stat[ndx].st_ntxntimeouts++;
- else
- lt->obj_stat[ndx].st_nlocktimeouts++;
-#endif
- ret = DB_LOCK_NOTGRANTED;
- timespecclear(&sh_locker->lk_expire);
- goto err;
- case DB_LSTAT_PENDING:
- if (LF_ISSET(DB_LOCK_UPGRADE)) {
- /*
- * The lock just granted got put on the holders
- * list. Since we're upgrading some other lock,
- * we've got to remove it here.
- */
- SH_TAILQ_REMOVE(
- &sh_obj->holders, newl, links, __db_lock);
- /*
- * Ensure the lock is not believed to be on
- * the object's lists, if we're traversing by
- * locker.
- */
- newl->links.stqe_prev = -1;
- goto upgrade;
- } else
- newl->status = DB_LSTAT_HELD;
- break;
- case DB_LSTAT_FREE:
- case DB_LSTAT_HELD:
- case DB_LSTAT_WAITING:
- default:
- __db_errx(env,
- "Unexpected lock status: %d", (int)newl->status);
- ret = __env_panic(env, EINVAL);
- goto err;
- }
- }
-
- lock->off = R_OFFSET(&lt->reginfo, newl);
- lock->gen = newl->gen;
- lock->mode = newl->mode;
- sh_locker->nlocks++;
- if (IS_WRITELOCK(newl->mode)) {
- sh_locker->nwrites++;
- if (newl->mode == DB_LOCK_WWRITE)
- F_SET(sh_locker, DB_LOCKER_DIRTY);
- }
-
- OBJECT_UNLOCK(lt, region, ndx);
- return (0);
-
-err: if (!LF_ISSET(DB_LOCK_UPGRADE | DB_LOCK_SWITCH))
- LOCK_INIT(*lock);
-
-done: if (newl != NULL &&
- (t_ret = __lock_freelock(lt, newl, sh_locker,
- DB_LOCK_FREE | DB_LOCK_UNLINK)) != 0 && ret == 0)
- ret = t_ret;
- OBJECT_UNLOCK(lt, region, ndx);
-
- return (ret);
-}
-
-/*
- * __lock_put_pp --
- * ENV->lock_put pre/post processing.
- *
- * PUBLIC: int __lock_put_pp __P((DB_ENV *, DB_LOCK *));
- */
-int
-__lock_put_pp(dbenv, lock)
- DB_ENV *dbenv;
- DB_LOCK *lock;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_LOCK->lock_put", DB_INIT_LOCK);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env, (__lock_put(env, lock)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_put --
- *	Internal lock_put interface.
- *
- * PUBLIC: int __lock_put __P((ENV *, DB_LOCK *));
- */
-int
-__lock_put(env, lock)
- ENV *env;
- DB_LOCK *lock;
-{
- DB_LOCKTAB *lt;
- int ret, run_dd;
-
- if (IS_RECOVERING(env))
- return (0);
-
- lt = env->lk_handle;
-
- LOCK_SYSTEM_LOCK(lt, (DB_LOCKREGION *)lt->reginfo.primary);
- ret = __lock_put_nolock(env, lock, &run_dd, 0);
- LOCK_SYSTEM_UNLOCK(lt, (DB_LOCKREGION *)lt->reginfo.primary);
-
- /*
- * Only run the lock detector if put told us to AND we are running
- * in auto-detect mode. If we are not running in auto-detect, then
- * a call to lock_detect here will 0 the need_dd bit, but will not
- * actually abort anything.
- */
- if (ret == 0 && run_dd)
- (void)__lock_detect(env,
- ((DB_LOCKREGION *)lt->reginfo.primary)->detect, NULL);
- return (ret);
-}
-
-static int
-__lock_put_nolock(env, lock, runp, flags)
- ENV *env;
- DB_LOCK *lock;
- int *runp;
- u_int32_t flags;
-{
- struct __db_lock *lockp;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- int ret;
-
- /* Check if locks have been globally turned off. */
- if (F_ISSET(env->dbenv, DB_ENV_NOLOCKING))
- return (0);
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
-
- lockp = R_ADDR(&lt->reginfo, lock->off);
- if (lock->gen != lockp->gen) {
- __db_errx(env, __db_lock_invalid, "DB_LOCK->lock_put");
- LOCK_INIT(*lock);
- return (EINVAL);
- }
-
- OBJECT_LOCK_NDX(lt, region, lock->ndx);
- ret = __lock_put_internal(lt,
- lockp, lock->ndx, flags | DB_LOCK_UNLINK | DB_LOCK_FREE);
- OBJECT_UNLOCK(lt, region, lock->ndx);
-
- LOCK_INIT(*lock);
-
- *runp = 0;
- if (ret == 0 && region->detect != DB_LOCK_NORUN &&
- (region->need_dd || timespecisset(&region->next_timeout)))
- *runp = 1;
-
- return (ret);
-}
-
-/*
- * __lock_downgrade --
- *
- * Used to downgrade locks. Currently this is used in three places: 1) by the
- * Concurrent Data Store product to downgrade write locks back to iwrite locks;
- * 2) to downgrade write-handle locks to read-handle locks at the end of
- * an open/create; and 3) to downgrade write locks to was_write to support
- * dirty reads.
- *
- * PUBLIC: int __lock_downgrade __P((ENV *,
- * PUBLIC: DB_LOCK *, db_lockmode_t, u_int32_t));
- */
-int
-__lock_downgrade(env, lock, new_mode, flags)
- ENV *env;
- DB_LOCK *lock;
- db_lockmode_t new_mode;
- u_int32_t flags;
-{
- struct __db_lock *lockp;
- DB_LOCKER *sh_locker;
- DB_LOCKOBJ *obj;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- int ret;
-
- ret = 0;
-
- /* Check if locks have been globally turned off. */
- if (F_ISSET(env->dbenv, DB_ENV_NOLOCKING))
- return (0);
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
-
- LOCK_SYSTEM_LOCK(lt, region);
-
- lockp = R_ADDR(&lt->reginfo, lock->off);
- if (lock->gen != lockp->gen) {
- __db_errx(env, __db_lock_invalid, "lock_downgrade");
- ret = EINVAL;
- goto out;
- }
-
- sh_locker = R_ADDR(&lt->reginfo, lockp->holder);
-
- if (IS_WRITELOCK(lockp->mode) && !IS_WRITELOCK(new_mode))
- sh_locker->nwrites--;
-
- lockp->mode = new_mode;
- lock->mode = new_mode;
-
- /* Get the object associated with this lock. */
- obj = (DB_LOCKOBJ *)((u_int8_t *)lockp + lockp->obj);
- OBJECT_LOCK_NDX(lt, region, obj->indx);
- STAT(lt->obj_stat[obj->indx].st_ndowngrade++);
- ret = __lock_promote(lt, obj, NULL, LF_ISSET(DB_LOCK_NOWAITERS));
- OBJECT_UNLOCK(lt, region, obj->indx);
-
-out: LOCK_SYSTEM_UNLOCK(lt, region);
- return (ret);
-}
-
-/*
- * __lock_put_internal -- put a lock structure
- * We assume that we are called with the proper object locked.
- */
-static int
-__lock_put_internal(lt, lockp, obj_ndx, flags)
- DB_LOCKTAB *lt;
- struct __db_lock *lockp;
- u_int32_t obj_ndx, flags;
-{
- DB_LOCKOBJ *sh_obj;
- DB_LOCKREGION *region;
- ENV *env;
- u_int32_t part_id;
- int ret, state_changed;
-
- COMPQUIET(env, NULL);
- env = lt->env;
- region = lt->reginfo.primary;
- ret = state_changed = 0;
-
- if (!OBJ_LINKS_VALID(lockp)) {
- /*
- * Someone removed this lock while we were doing a release
- * by locker id. We are trying to free this lock, but it's
- * already been done; all we need to do is return it to the
- * free list.
- */
- (void)__lock_freelock(lt, lockp, NULL, DB_LOCK_FREE);
- return (0);
- }
-
-#ifdef HAVE_STATISTICS
- if (LF_ISSET(DB_LOCK_DOALL))
- lt->obj_stat[obj_ndx].st_nreleases += lockp->refcount;
- else
- lt->obj_stat[obj_ndx].st_nreleases++;
-#endif
-
- if (!LF_ISSET(DB_LOCK_DOALL) && lockp->refcount > 1) {
- lockp->refcount--;
- return (0);
- }
-
- /* Increment generation number. */
- lockp->gen++;
-
- /* Get the object associated with this lock. */
- sh_obj = (DB_LOCKOBJ *)((u_int8_t *)lockp + lockp->obj);
-
- /*
- * Remove this lock from its holders/waitlist. Set its status
- * to ABORTED. It may get freed below, but if not then the
- * waiter has been aborted (it will panic if the lock is
- * free).
- */
- if (lockp->status != DB_LSTAT_HELD &&
- lockp->status != DB_LSTAT_PENDING) {
- if ((ret = __lock_remove_waiter(
- lt, sh_obj, lockp, DB_LSTAT_ABORTED)) != 0)
- return (ret);
- } else {
- SH_TAILQ_REMOVE(&sh_obj->holders, lockp, links, __db_lock);
- lockp->links.stqe_prev = -1;
- }
-
- if (LF_ISSET(DB_LOCK_NOPROMOTE))
- state_changed = 0;
- else
- if ((ret = __lock_promote(lt, sh_obj, &state_changed,
- LF_ISSET(DB_LOCK_NOWAITERS))) != 0)
- return (ret);
-
- /* Check if object should be reclaimed. */
- if (SH_TAILQ_FIRST(&sh_obj->holders, __db_lock) == NULL &&
- SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock) == NULL) {
- part_id = LOCK_PART(region, obj_ndx);
- SH_TAILQ_REMOVE(
- &lt->obj_tab[obj_ndx], sh_obj, links, __db_lockobj);
- if (sh_obj->lockobj.size > sizeof(sh_obj->objdata)) {
- if (region->part_t_size != 1)
- LOCK_REGION_LOCK(env);
- __env_alloc_free(&lt->reginfo,
- SH_DBT_PTR(&sh_obj->lockobj));
- if (region->part_t_size != 1)
- LOCK_REGION_UNLOCK(env);
- }
- SH_TAILQ_INSERT_HEAD(
- &FREE_OBJS(lt, part_id), sh_obj, links, __db_lockobj);
- sh_obj->generation++;
- STAT(lt->part_array[part_id].part_stat.st_nobjects--);
- STAT(lt->obj_stat[obj_ndx].st_nobjects--);
- state_changed = 1;
- }
-
- /* Free lock. */
- if (LF_ISSET(DB_LOCK_UNLINK | DB_LOCK_FREE))
- ret = __lock_freelock(lt, lockp,
- R_ADDR(&lt->reginfo, lockp->holder), flags);
-
- /*
- * If we did not promote anyone, we need to run the deadlock
- * detector again.
- */
- if (state_changed == 0)
- region->need_dd = 1;
-
- return (ret);
-}
-
-/*
- * __lock_freelock --
- * Free a lock. Unlink it from its locker if necessary.
- * We must hold the object lock.
- *
- */
-static int
-__lock_freelock(lt, lockp, sh_locker, flags)
- DB_LOCKTAB *lt;
- struct __db_lock *lockp;
- DB_LOCKER *sh_locker;
- u_int32_t flags;
-{
- DB_LOCKREGION *region;
- ENV *env;
- u_int32_t part_id;
- int ret;
-
- env = lt->env;
- region = lt->reginfo.primary;
-
- if (LF_ISSET(DB_LOCK_UNLINK)) {
- SH_LIST_REMOVE(lockp, locker_links, __db_lock);
- if (lockp->status == DB_LSTAT_HELD) {
- sh_locker->nlocks--;
- if (IS_WRITELOCK(lockp->mode))
- sh_locker->nwrites--;
- }
- }
-
- if (LF_ISSET(DB_LOCK_FREE)) {
- /*
- * If the lock is not held, we cannot be sure of its mutex
- * state, so we just destroy it and let it be re-created
- * when needed.
- */
- part_id = LOCK_PART(region, lockp->indx);
- if (lockp->mtx_lock != MUTEX_INVALID &&
- lockp->status != DB_LSTAT_HELD &&
- lockp->status != DB_LSTAT_EXPIRED &&
- (ret = __mutex_free(env, &lockp->mtx_lock)) != 0)
- return (ret);
- lockp->status = DB_LSTAT_FREE;
- SH_TAILQ_INSERT_HEAD(&FREE_LOCKS(lt, part_id),
- lockp, links, __db_lock);
- STAT(lt->part_array[part_id].part_stat.st_nlocks--);
- STAT(lt->obj_stat[lockp->indx].st_nlocks--);
- }
-
- return (0);
-}
-
-/*
- * __lock_allocobj -- allocate an object from another partition.
- * We assume we have the partition locked on entry and leave
- * with the same partition locked on exit.
- */
-static int
-__lock_allocobj(lt, part_id)
- DB_LOCKTAB *lt;
- u_int32_t part_id;
-{
- DB_LOCKOBJ *sh_obj;
- DB_LOCKPART *end_p, *cur_p;
- DB_LOCKREGION *region;
- int begin;
-
- region = lt->reginfo.primary;
-
- if (region->part_t_size == 1)
- goto err;
-
- begin = 0;
- sh_obj = NULL;
- cur_p = &lt->part_array[part_id];
- MUTEX_UNLOCK(lt->env, cur_p->mtx_part);
- end_p = &lt->part_array[region->part_t_size];
- /*
- * Start looking at the next partition and wrap around. If
- * we get back to our partition then raise an error.
- */
-again: for (cur_p++; sh_obj == NULL && cur_p < end_p; cur_p++) {
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
- if ((sh_obj =
- SH_TAILQ_FIRST(&cur_p->free_objs, __db_lockobj)) != NULL)
- SH_TAILQ_REMOVE(&cur_p->free_objs,
- sh_obj, links, __db_lockobj);
- MUTEX_UNLOCK(lt->env, cur_p->mtx_part);
- }
- if (sh_obj != NULL) {
- cur_p = &lt->part_array[part_id];
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
- SH_TAILQ_INSERT_HEAD(&cur_p->free_objs,
- sh_obj, links, __db_lockobj);
- STAT(cur_p->part_stat.st_objectsteals++);
- return (0);
- }
- if (!begin) {
- begin = 1;
- cur_p = lt->part_array;
- end_p = &lt->part_array[part_id];
- goto again;
- }
- MUTEX_LOCK(lt->env, cur_p->mtx_part);
-
-err: return (__lock_nomem(lt->env, "object entries"));
-}
-/*
- * __lock_getobj --
- * Get an object in the object hash table. The create parameter
- * indicates if the object should be created if it doesn't exist in
- * the table.
- *
- * This must be called with the object bucket locked.
- */
-static int
-__lock_getobj(lt, obj, ndx, create, retp)
- DB_LOCKTAB *lt;
- const DBT *obj;
- u_int32_t ndx;
- int create;
- DB_LOCKOBJ **retp;
-{
- DB_LOCKOBJ *sh_obj;
- DB_LOCKREGION *region;
- ENV *env;
- int ret;
- void *p;
- u_int32_t len, part_id;
-
- env = lt->env;
- region = lt->reginfo.primary;
- len = 0;
-
- /* Look up the object in the hash table. */
-retry: SH_TAILQ_FOREACH(sh_obj, &lt->obj_tab[ndx], links, __db_lockobj) {
- len++;
- if (obj->size == sh_obj->lockobj.size &&
- memcmp(obj->data,
- SH_DBT_PTR(&sh_obj->lockobj), obj->size) == 0)
- break;
- }
-
- /*
- * If we found the object, then we can just return it. If
- * we didn't find the object, then we need to create it.
- */
- if (sh_obj == NULL && create) {
- /* Create new object and then insert it into hash table. */
- part_id = LOCK_PART(region, ndx);
- if ((sh_obj = SH_TAILQ_FIRST(&FREE_OBJS(
- lt, part_id), __db_lockobj)) == NULL) {
- if ((ret = __lock_allocobj(lt, part_id)) == 0)
- goto retry;
- goto err;
- }
-
- /*
- * If we can fit this object in the structure, do so instead
- * of alloc-ing space for it.
- */
- if (obj->size <= sizeof(sh_obj->objdata))
- p = sh_obj->objdata;
- else {
- /*
- * If we have only one partition, the region is locked.
- */
- if (region->part_t_size != 1)
- LOCK_REGION_LOCK(env);
- if ((ret =
- __env_alloc(&lt->reginfo, obj->size, &p)) != 0) {
- __db_errx(env,
- "No space for lock object storage");
- if (region->part_t_size != 1)
- LOCK_REGION_UNLOCK(env);
- goto err;
- }
- if (region->part_t_size != 1)
- LOCK_REGION_UNLOCK(env);
- }
-
- memcpy(p, obj->data, obj->size);
-
- SH_TAILQ_REMOVE(&FREE_OBJS(
- lt, part_id), sh_obj, links, __db_lockobj);
-#ifdef HAVE_STATISTICS
- /*
- * Keep track of both the max number of objects allocated
- * per partition and the max number of objects used by
- * this bucket.
- */
- len++;
- if (++lt->obj_stat[ndx].st_nobjects >
- lt->obj_stat[ndx].st_maxnobjects)
- lt->obj_stat[ndx].st_maxnobjects =
- lt->obj_stat[ndx].st_nobjects;
- if (++lt->part_array[part_id].part_stat.st_nobjects >
- lt->part_array[part_id].part_stat.st_maxnobjects)
- lt->part_array[part_id].part_stat.st_maxnobjects =
- lt->part_array[part_id].part_stat.st_nobjects;
-#endif
-
- sh_obj->indx = ndx;
- SH_TAILQ_INIT(&sh_obj->waiters);
- SH_TAILQ_INIT(&sh_obj->holders);
- sh_obj->lockobj.size = obj->size;
- sh_obj->lockobj.off =
- (roff_t)SH_PTR_TO_OFF(&sh_obj->lockobj, p);
- SH_TAILQ_INSERT_HEAD(
- &lt->obj_tab[ndx], sh_obj, links, __db_lockobj);
- }
-
-#ifdef HAVE_STATISTICS
- if (len > lt->obj_stat[ndx].st_hash_len)
- lt->obj_stat[ndx].st_hash_len = len;
-#endif
-
- *retp = sh_obj;
- return (0);
-
-err: return (ret);
-}
-
-/*
- * __lock_is_parent --
- * Given a locker and a transaction, return 1 if the locker is
- * an ancestor of the designated transaction. This is used to determine
- * if we should grant locks that appear to conflict, but don't because
- * the lock is already held by an ancestor.
- */
-static int
-__lock_is_parent(lt, l_off, sh_locker)
- DB_LOCKTAB *lt;
- roff_t l_off;
- DB_LOCKER *sh_locker;
-{
- DB_LOCKER *parent;
-
- parent = sh_locker;
- while (parent->parent_locker != INVALID_ROFF) {
- if (parent->parent_locker == l_off)
- return (1);
- parent = R_ADDR(&lt->reginfo, parent->parent_locker);
- }
-
- return (0);
-}
-
-/*
- * __lock_locker_is_parent --
- * Determine if "locker" is an ancestor of "child".
- * *retp == 1 if so, 0 otherwise.
- *
- * PUBLIC: int __lock_locker_is_parent
- * PUBLIC: __P((ENV *, DB_LOCKER *, DB_LOCKER *, int *));
- */
-int
-__lock_locker_is_parent(env, locker, child, retp)
- ENV *env;
- DB_LOCKER *locker;
- DB_LOCKER *child;
- int *retp;
-{
- DB_LOCKTAB *lt;
-
- lt = env->lk_handle;
-
- /*
- * The locker may not exist for this transaction; if not, it has
- * no parents.
- */
- if (locker == NULL)
- *retp = 0;
- else
- *retp = __lock_is_parent(lt,
- R_OFFSET(&lt->reginfo, locker), child);
- return (0);
-}
-
-/*
- * __lock_inherit_locks --
- * Called on child commit to merge child's locks with parent's.
- */
-static int
-__lock_inherit_locks(lt, sh_locker, flags)
- DB_LOCKTAB *lt;
- DB_LOCKER *sh_locker;
- u_int32_t flags;
-{
- DB_LOCKER *sh_parent;
- DB_LOCKOBJ *obj;
- DB_LOCKREGION *region;
- ENV *env;
- int ret;
- struct __db_lock *hlp, *lp;
- roff_t poff;
-
- env = lt->env;
- region = lt->reginfo.primary;
-
- /*
- * Get the committing locker and mark it as deleted.
- * This allows us to traverse the locker links without
- * worrying that someone else is deleting locks out
- * from under us. However, if the locker doesn't
- * exist, that just means that the child holds no
- * locks, so inheritance is easy!
- */
- if (sh_locker == NULL) {
- __db_errx(env, __db_locker_invalid);
- return (EINVAL);
- }
-
- /* Make sure we are a child transaction. */
- if (sh_locker->parent_locker == INVALID_ROFF) {
- __db_errx(env, "Not a child transaction");
- return (EINVAL);
- }
- sh_parent = R_ADDR(&lt->reginfo, sh_locker->parent_locker);
-
- /*
- * In order to make it possible for a parent to have
- * many, many children who lock the same objects, and
- * not require an inordinate number of locks, we try
- * to merge the child's locks with its parent's.
- */
- poff = R_OFFSET(&lt->reginfo, sh_parent);
- for (lp = SH_LIST_FIRST(&sh_locker->heldby, __db_lock);
- lp != NULL;
- lp = SH_LIST_FIRST(&sh_locker->heldby, __db_lock)) {
- SH_LIST_REMOVE(lp, locker_links, __db_lock);
-
- /* See if the parent already has a lock. */
- obj = (DB_LOCKOBJ *)((u_int8_t *)lp + lp->obj);
- OBJECT_LOCK_NDX(lt, region, obj->indx);
- SH_TAILQ_FOREACH(hlp, &obj->holders, links, __db_lock)
- if (hlp->holder == poff && lp->mode == hlp->mode)
- break;
-
- if (hlp != NULL) {
- /* Parent already holds lock. */
- hlp->refcount += lp->refcount;
-
- /* Remove lock from object list and free it. */
- DB_ASSERT(env, lp->status == DB_LSTAT_HELD);
- SH_TAILQ_REMOVE(&obj->holders, lp, links, __db_lock);
- (void)__lock_freelock(lt, lp, sh_locker, DB_LOCK_FREE);
- } else {
- /* Just move lock to parent chains. */
- SH_LIST_INSERT_HEAD(&sh_parent->heldby,
- lp, locker_links, __db_lock);
- lp->holder = poff;
- }
-
- /*
- * We may need to promote regardless of whether we simply
- * moved the lock to the parent or changed the parent's
- * reference count, because there might be a sibling waiting,
- * who will now be allowed to make forward progress.
- */
- ret =
- __lock_promote(lt, obj, NULL, LF_ISSET(DB_LOCK_NOWAITERS));
- OBJECT_UNLOCK(lt, region, obj->indx);
- if (ret != 0)
- return (ret);
- }
-
- /* Transfer child counts to parent. */
- sh_parent->nlocks += sh_locker->nlocks;
- sh_parent->nwrites += sh_locker->nwrites;
-
- return (0);
-}
-
-/*
- * __lock_promote --
- *
- * Look through the waiters and holders lists and decide which (if any)
- * locks can be promoted. Promote any that are eligible.
- *
- * PUBLIC: int __lock_promote
- * PUBLIC: __P((DB_LOCKTAB *, DB_LOCKOBJ *, int *, u_int32_t));
- */
-int
-__lock_promote(lt, obj, state_changedp, flags)
- DB_LOCKTAB *lt;
- DB_LOCKOBJ *obj;
- int *state_changedp;
- u_int32_t flags;
-{
- struct __db_lock *lp_w, *lp_h, *next_waiter;
- DB_LOCKREGION *region;
- int had_waiters, state_changed;
-
- region = lt->reginfo.primary;
- had_waiters = 0;
-
- /*
- * We need to do lock promotion. We also need to determine if we're
- * going to need to run the deadlock detector again. If we release
- * locks, and there are waiters, but no one gets promoted, then we
- * haven't fundamentally changed the lockmgr state, so we may still
- * have a deadlock and we have to run again. However, if there were
- * no waiters, or we actually promoted someone, then we are OK and we
- * don't have to run it immediately.
- *
- * During promotion, we look for state changes so we can return this
- * information to the caller.
- */
-
- for (lp_w = SH_TAILQ_FIRST(&obj->waiters, __db_lock),
- state_changed = lp_w == NULL;
- lp_w != NULL;
- lp_w = next_waiter) {
- had_waiters = 1;
- next_waiter = SH_TAILQ_NEXT(lp_w, links, __db_lock);
-
- /* Waiter may have aborted or expired. */
- if (lp_w->status != DB_LSTAT_WAITING)
- continue;
- /* Are we switching locks? */
- if (LF_ISSET(DB_LOCK_NOWAITERS) && lp_w->mode == DB_LOCK_WAIT)
- continue;
-
- SH_TAILQ_FOREACH(lp_h, &obj->holders, links, __db_lock) {
- if (lp_h->holder != lp_w->holder &&
- CONFLICTS(lt, region, lp_h->mode, lp_w->mode)) {
- if (!__lock_is_parent(lt, lp_h->holder,
- R_ADDR(&lt->reginfo, lp_w->holder)))
- break;
- }
- }
- if (lp_h != NULL) /* Found a conflict. */
- break;
-
- /* No conflict, promote the waiting lock. */
- SH_TAILQ_REMOVE(&obj->waiters, lp_w, links, __db_lock);
- lp_w->status = DB_LSTAT_PENDING;
- SH_TAILQ_INSERT_TAIL(&obj->holders, lp_w, links);
-
- /* Wake up waiter. */
- MUTEX_UNLOCK(lt->env, lp_w->mtx_lock);
- state_changed = 1;
- }
-
- /*
- * If this object had waiters and no longer does, we need
- * to remove it from the dd_objs list.
- */
- if (had_waiters && SH_TAILQ_FIRST(&obj->waiters, __db_lock) == NULL) {
- LOCK_DD(lt->env, region);
- /*
- * Bump the generation when removing an object from the
- * queue so that the deadlock detector will retry.
- */
- obj->generation++;
- SH_TAILQ_REMOVE(&region->dd_objs, obj, dd_links, __db_lockobj);
- UNLOCK_DD(lt->env, region);
- }
-
- if (state_changedp != NULL)
- *state_changedp = state_changed;
-
- return (0);
-}
-
-/*
- * __lock_remove_waiter --
- * Any lock on the waitlist has a process waiting for it. Therefore,
- * we can't return the lock to the freelist immediately. Instead, we can
- * remove the lock from the list of waiters, set the status field of the
- * lock, and then let the process waking up return the lock to the
- * free list.
- *
- * This must be called with the Object bucket locked.
- */
-static int
-__lock_remove_waiter(lt, sh_obj, lockp, status)
- DB_LOCKTAB *lt;
- DB_LOCKOBJ *sh_obj;
- struct __db_lock *lockp;
- db_status_t status;
-{
- DB_LOCKREGION *region;
- int do_wakeup;
-
- region = lt->reginfo.primary;
-
- do_wakeup = lockp->status == DB_LSTAT_WAITING;
-
- SH_TAILQ_REMOVE(&sh_obj->waiters, lockp, links, __db_lock);
- lockp->links.stqe_prev = -1;
- lockp->status = status;
- if (SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock) == NULL) {
- LOCK_DD(lt->env, region);
- sh_obj->generation++;
- SH_TAILQ_REMOVE(
- &region->dd_objs,
- sh_obj, dd_links, __db_lockobj);
- UNLOCK_DD(lt->env, region);
- }
-
- /*
- * Wake whoever is waiting on this lock.
- */
- if (do_wakeup)
- MUTEX_UNLOCK(lt->env, lockp->mtx_lock);
-
- return (0);
-}
-
-/*
- * __lock_trade --
- *
- * Trade locker ids on a lock. This is used to reassign file locks from
- * a transactional locker id to a long-lived locker id. This should be
- * called with the region mutex held.
- */
-static int
-__lock_trade(env, lock, new_locker)
- ENV *env;
- DB_LOCK *lock;
- DB_LOCKER *new_locker;
-{
- struct __db_lock *lp;
- DB_LOCKTAB *lt;
- int ret;
-
- lt = env->lk_handle;
- lp = R_ADDR(&lt->reginfo, lock->off);
-
- /* If the lock is already released, simply return. */
- if (lp->gen != lock->gen)
- return (DB_NOTFOUND);
-
- if (new_locker == NULL) {
- __db_errx(env, "Locker does not exist");
- return (EINVAL);
- }
-
- /* Remove the lock from its current locker. */
- if ((ret = __lock_freelock(lt,
- lp, R_ADDR(&lt->reginfo, lp->holder), DB_LOCK_UNLINK)) != 0)
- return (ret);
-
- /* Add lock to its new locker. */
- SH_LIST_INSERT_HEAD(&new_locker->heldby, lp, locker_links, __db_lock);
- new_locker->nlocks++;
- if (IS_WRITELOCK(lp->mode))
- new_locker->nwrites++;
- lp->holder = R_OFFSET(&lt->reginfo, new_locker);
-
- return (0);
-}
diff --git a/lock/lock_deadlock.c b/lock/lock_deadlock.c
deleted file mode 100644
index 0898c20..0000000
--- a/lock/lock_deadlock.c
+++ /dev/null
@@ -1,1045 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-#include "dbinc/log.h"
-#include "dbinc/txn.h"
-
-#define ISSET_MAP(M, N) ((M)[(N) / 32] & (1 << ((N) % 32)))
-
-#define CLEAR_MAP(M, N) { \
- u_int32_t __i; \
- for (__i = 0; __i < (N); __i++) \
- (M)[__i] = 0; \
-}
-
-#define SET_MAP(M, B) ((M)[(B) / 32] |= (1 << ((B) % 32)))
-#define CLR_MAP(M, B) ((M)[(B) / 32] &= ~((u_int)1 << ((B) % 32)))
-
-#define OR_MAP(D, S, N) { \
- u_int32_t __i; \
- for (__i = 0; __i < (N); __i++) \
- D[__i] |= S[__i]; \
-}
-#define BAD_KILLID 0xffffffff
-
-typedef struct {
- int valid;
- int self_wait;
- int in_abort;
- u_int32_t count;
- u_int32_t id;
- roff_t last_lock;
- roff_t last_obj;
- u_int32_t last_ndx;
- u_int32_t last_locker_id;
- db_pgno_t pgno;
-} locker_info;
-
-static int __dd_abort __P((ENV *, locker_info *, int *));
-static int __dd_build __P((ENV *, u_int32_t,
- u_int32_t **, u_int32_t *, u_int32_t *, locker_info **, int*));
-static int __dd_find __P((ENV *,
- u_int32_t *, locker_info *, u_int32_t, u_int32_t, u_int32_t ***));
-static int __dd_isolder __P((u_int32_t, u_int32_t, u_int32_t, u_int32_t));
-static int __dd_verify __P((locker_info *, u_int32_t *, u_int32_t *,
- u_int32_t *, u_int32_t, u_int32_t, u_int32_t));
-
-#ifdef DIAGNOSTIC
-static void __dd_debug
- __P((ENV *, locker_info *, u_int32_t *, u_int32_t, u_int32_t));
-#endif
-
-/*
- * __lock_detect_pp --
- * ENV->lock_detect pre/post processing.
- *
- * PUBLIC: int __lock_detect_pp __P((DB_ENV *, u_int32_t, u_int32_t, int *));
- */
-int
-__lock_detect_pp(dbenv, flags, atype, rejectp)
- DB_ENV *dbenv;
- u_int32_t flags, atype;
- int *rejectp;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_detect", DB_INIT_LOCK);
-
- /* Validate arguments. */
- if ((ret = __db_fchk(env, "DB_ENV->lock_detect", flags, 0)) != 0)
- return (ret);
- switch (atype) {
- case DB_LOCK_DEFAULT:
- case DB_LOCK_EXPIRE:
- case DB_LOCK_MAXLOCKS:
- case DB_LOCK_MAXWRITE:
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MINWRITE:
- case DB_LOCK_OLDEST:
- case DB_LOCK_RANDOM:
- case DB_LOCK_YOUNGEST:
- break;
- default:
- __db_errx(env,
- "DB_ENV->lock_detect: unknown deadlock detection mode specified");
- return (EINVAL);
- }
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env, (__lock_detect(env, atype, rejectp)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_detect --
- * ENV->lock_detect.
- *
- * PUBLIC: int __lock_detect __P((ENV *, u_int32_t, int *));
- */
-int
-__lock_detect(env, atype, rejectp)
- ENV *env;
- u_int32_t atype;
- int *rejectp;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- db_timespec now;
- locker_info *idmap;
- u_int32_t *bitmap, *copymap, **deadp, **deadlist, *tmpmap;
- u_int32_t i, cid, keeper, killid, limit, nalloc, nlockers;
- u_int32_t lock_max, txn_max;
- int ret, status;
-
- /*
- * If this environment is a replication client, then we must use the
- * MINWRITE detection discipline.
- */
- if (IS_REP_CLIENT(env))
- atype = DB_LOCK_MINWRITE;
-
- copymap = tmpmap = NULL;
- deadlist = NULL;
-
- lt = env->lk_handle;
- if (rejectp != NULL)
- *rejectp = 0;
-
-	/*
-	 * Check if a detector run is necessary: make a pass only if
-	 * auto-detect would run.
-	 */
- region = lt->reginfo.primary;
-
- timespecclear(&now);
- if (region->need_dd == 0 &&
- (!timespecisset(&region->next_timeout) ||
- !__lock_expired(env, &now, &region->next_timeout))) {
- return (0);
- }
- if (region->need_dd == 0)
- atype = DB_LOCK_EXPIRE;
-
- /* Reset need_dd, so we know we've run the detector. */
- region->need_dd = 0;
-
- /* Build the waits-for bitmap. */
- ret = __dd_build(env,
- atype, &bitmap, &nlockers, &nalloc, &idmap, rejectp);
- lock_max = region->stat.st_cur_maxid;
- if (ret != 0 || atype == DB_LOCK_EXPIRE)
- return (ret);
-
- /* If there are no lockers, there are no deadlocks. */
- if (nlockers == 0)
- return (0);
-
-#ifdef DIAGNOSTIC
- if (FLD_ISSET(env->dbenv->verbose, DB_VERB_WAITSFOR))
- __dd_debug(env, idmap, bitmap, nlockers, nalloc);
-#endif
-
- /* Now duplicate the bitmaps so we can verify deadlock participants. */
- if ((ret = __os_calloc(env, (size_t)nlockers,
- sizeof(u_int32_t) * nalloc, &copymap)) != 0)
- goto err;
- memcpy(copymap, bitmap, nlockers * sizeof(u_int32_t) * nalloc);
-
- if ((ret = __os_calloc(env, sizeof(u_int32_t), nalloc, &tmpmap)) != 0)
- goto err;
-
- /* Find a deadlock. */
-	if ((ret =
-	    __dd_find(env, bitmap, idmap, nlockers, nalloc, &deadlist)) != 0)
-		goto err;
-
- /*
- * We need the cur_maxid from the txn region as well. In order
- * to avoid tricky synchronization between the lock and txn
- * regions, we simply unlock the lock region and then lock the
- * txn region. This introduces a small window during which the
- * transaction system could then wrap. We're willing to return
- * the wrong answer for "oldest" or "youngest" in those rare
- * circumstances.
- */
- if (TXN_ON(env)) {
- TXN_SYSTEM_LOCK(env);
- txn_max = ((DB_TXNREGION *)
- env->tx_handle->reginfo.primary)->cur_maxid;
- TXN_SYSTEM_UNLOCK(env);
- } else
- txn_max = TXN_MAXIMUM;
-
- killid = BAD_KILLID;
- for (deadp = deadlist; *deadp != NULL; deadp++) {
- if (rejectp != NULL)
- ++*rejectp;
- killid = (u_int32_t)(*deadp - bitmap) / nalloc;
- limit = killid;
-
-		/*
-		 * There are cases in which our general algorithm will
-		 * fail. Returning 1 from verify indicates that the
-		 * particular locker is not only involved in a deadlock,
-		 * but that killing it will allow others to make forward
-		 * progress. Unfortunately, there are cases where we need
-		 * to abort someone, but killing them will not necessarily
-		 * ensure forward progress (imagine N readers all trying to
-		 * acquire a write lock).
-		 * killid is only set to lockers that pass the __dd_verify
-		 * test. keeper holds the best candidate even if it does
-		 * not pass __dd_verify. Once we fill in killid we no
-		 * longer need a keeper, but we keep updating it anyway.
-		 */
-
- keeper = idmap[killid].in_abort == 0 ? killid : BAD_KILLID;
- if (keeper == BAD_KILLID ||
- __dd_verify(idmap, *deadp,
- tmpmap, copymap, nlockers, nalloc, keeper) == 0)
- killid = BAD_KILLID;
-
- if (killid != BAD_KILLID &&
- (atype == DB_LOCK_DEFAULT || atype == DB_LOCK_RANDOM))
- goto dokill;
-
- /*
- * Start with the id that we know is deadlocked, then examine
-		 * all other set bits and see if any is a better candidate
-		 * for abortion that is genuinely part of the deadlock.
- * The definition of "best":
- * MAXLOCKS: maximum count
- * MAXWRITE: maximum write count
- * MINLOCKS: minimum count
- * MINWRITE: minimum write count
- * OLDEST: smallest id
- * YOUNGEST: largest id
- */
- for (i = (limit + 1) % nlockers;
- i != limit;
- i = (i + 1) % nlockers) {
- if (!ISSET_MAP(*deadp, i) || idmap[i].in_abort)
- continue;
-
- /*
- * Determine if we have a verified candidate
- * in killid, if not then compare with the
- * non-verified candidate in keeper.
- */
- if (killid == BAD_KILLID) {
- if (keeper == BAD_KILLID)
- goto use_next;
- else
- cid = keeper;
- } else
- cid = killid;
-
- switch (atype) {
- case DB_LOCK_OLDEST:
- if (__dd_isolder(idmap[cid].id,
- idmap[i].id, lock_max, txn_max))
- continue;
- break;
- case DB_LOCK_YOUNGEST:
- if (__dd_isolder(idmap[i].id,
- idmap[cid].id, lock_max, txn_max))
- continue;
- break;
- case DB_LOCK_MAXLOCKS:
- if (idmap[i].count < idmap[cid].count)
- continue;
- break;
- case DB_LOCK_MAXWRITE:
- if (idmap[i].count < idmap[cid].count)
- continue;
- break;
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MINWRITE:
- if (idmap[i].count > idmap[cid].count)
- continue;
- break;
- case DB_LOCK_DEFAULT:
- case DB_LOCK_RANDOM:
- goto dokill;
-
- default:
- killid = BAD_KILLID;
- ret = EINVAL;
- goto dokill;
- }
-
-use_next: keeper = i;
- if (__dd_verify(idmap, *deadp,
- tmpmap, copymap, nlockers, nalloc, i))
- killid = i;
- }
-
-dokill: if (killid == BAD_KILLID) {
- if (keeper == BAD_KILLID)
- continue;
- else {
- /*
- * Removing a single locker will not
- * break the deadlock, signal to run
- * detection again.
- */
- region->need_dd = 1;
- killid = keeper;
- }
- }
-
- /* Kill the locker with lockid idmap[killid]. */
- if ((ret = __dd_abort(env, &idmap[killid], &status)) != 0)
- break;
-
- /*
- * It's possible that the lock was already aborted; this isn't
- * necessarily a problem, so do not treat it as an error. If
- * the txn was aborting and deadlocked trying to upgrade
- * a was_write lock, the detector should be run again or
- * the deadlock might persist.
- */
- if (status != 0) {
- if (status != DB_ALREADY_ABORTED)
- __db_errx(env,
- "warning: unable to abort locker %lx",
- (u_long)idmap[killid].id);
- else
- region->need_dd = 1;
- } else if (FLD_ISSET(env->dbenv->verbose, DB_VERB_DEADLOCK))
- __db_msg(env,
- "Aborting locker %lx", (u_long)idmap[killid].id);
- }
-err:	if (copymap != NULL)
-		__os_free(env, copymap);
-	if (deadlist != NULL)
-		__os_free(env, deadlist);
-	if (tmpmap != NULL)
-		__os_free(env, tmpmap);
- __os_free(env, bitmap);
- __os_free(env, idmap);
-
- return (ret);
-}
-
-/*
- * ========================================================================
- * Utilities
- */
-
-#define DD_INVALID_ID ((u_int32_t) -1)
-
-/*
- * __dd_build --
- * Build the lock dependency bit maps.
- *	Notes on synchronization:
- * LOCK_SYSTEM_LOCK is used to hold objects locked when we have
- * a single partition.
- * LOCK_LOCKERS is held while we are walking the lockers list and
- * to single thread the use of lockerp->dd_id.
- * LOCK_DD protects the DD list of objects.
- */
-
-static int
-__dd_build(env, atype, bmp, nlockers, allocp, idmap, rejectp)
- ENV *env;
- u_int32_t atype, **bmp, *nlockers, *allocp;
- locker_info **idmap;
- int *rejectp;
-{
- struct __db_lock *lp;
- DB_LOCKER *lip, *lockerp, *child;
- DB_LOCKOBJ *op, *lo, *np;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- locker_info *id_array;
- db_timespec now, min_timeout;
- u_int32_t *bitmap, count, dd;
- u_int32_t *entryp, gen, id, indx, ndx, nentries, *tmpmap;
- u_int8_t *pptr;
- int is_first, ret;
-
- COMPQUIET(indx, 0);
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- timespecclear(&now);
- timespecclear(&min_timeout);
-
- /*
- * While we always check for expired timeouts, if we are called with
- * DB_LOCK_EXPIRE, then we are only checking for timeouts (i.e., not
- * doing deadlock detection at all). If we aren't doing real deadlock
-	 * detection, then we can skip a significant amount of the processing.
- * In particular we do not build the conflict array and our caller
- * needs to expect this.
- */
- LOCK_SYSTEM_LOCK(lt, region);
- if (atype == DB_LOCK_EXPIRE) {
-skip: LOCK_DD(env, region);
- op = SH_TAILQ_FIRST(&region->dd_objs, __db_lockobj);
- for (; op != NULL; op = np) {
- indx = op->indx;
- gen = op->generation;
- UNLOCK_DD(env, region);
- OBJECT_LOCK_NDX(lt, region, indx);
- if (op->generation != gen) {
- OBJECT_UNLOCK(lt, region, indx);
- goto skip;
- }
- SH_TAILQ_FOREACH(lp, &op->waiters, links, __db_lock) {
- lockerp = (DB_LOCKER *)
- R_ADDR(&lt->reginfo, lp->holder);
- if (lp->status == DB_LSTAT_WAITING) {
- if (__lock_expired(env,
- &now, &lockerp->lk_expire)) {
- lp->status = DB_LSTAT_EXPIRED;
- MUTEX_UNLOCK(
- env, lp->mtx_lock);
- if (rejectp != NULL)
- ++*rejectp;
- continue;
- }
- if (timespecisset(
- &lockerp->lk_expire) &&
- (!timespecisset(&min_timeout) ||
- timespeccmp(&min_timeout,
- &lockerp->lk_expire, >)))
- min_timeout =
- lockerp->lk_expire;
- }
- }
- LOCK_DD(env, region);
- np = SH_TAILQ_NEXT(op, dd_links, __db_lockobj);
- OBJECT_UNLOCK(lt, region, indx);
- }
- UNLOCK_DD(env, region);
- LOCK_SYSTEM_UNLOCK(lt, region);
- goto done;
- }
-
- /*
- * Allocate after locking the region
- * to make sure the structures are large enough.
- */
- LOCK_LOCKERS(env, region);
- count = region->nlockers;
- if (count == 0) {
- UNLOCK_LOCKERS(env, region);
- *nlockers = 0;
- return (0);
- }
-
- if (FLD_ISSET(env->dbenv->verbose, DB_VERB_DEADLOCK))
- __db_msg(env, "%lu lockers", (u_long)count);
-
- nentries = (u_int32_t)DB_ALIGN(count, 32) / 32;
-
- /* Allocate enough space for a count by count bitmap matrix. */
- if ((ret = __os_calloc(env, (size_t)count,
- sizeof(u_int32_t) * nentries, &bitmap)) != 0) {
- UNLOCK_LOCKERS(env, region);
- return (ret);
- }
-
- if ((ret = __os_calloc(env,
- sizeof(u_int32_t), nentries, &tmpmap)) != 0) {
- UNLOCK_LOCKERS(env, region);
- __os_free(env, bitmap);
- return (ret);
- }
-
- if ((ret = __os_calloc(env,
- (size_t)count, sizeof(locker_info), &id_array)) != 0) {
- UNLOCK_LOCKERS(env, region);
- __os_free(env, bitmap);
- __os_free(env, tmpmap);
- return (ret);
- }
-
- /*
- * First we go through and assign each locker a deadlock detector id.
- */
- id = 0;
- SH_TAILQ_FOREACH(lip, &region->lockers, ulinks, __db_locker) {
- if (lip->master_locker == INVALID_ROFF) {
- DB_ASSERT(env, id < count);
- lip->dd_id = id++;
- id_array[lip->dd_id].id = lip->id;
- switch (atype) {
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MAXLOCKS:
- id_array[lip->dd_id].count = lip->nlocks;
- break;
- case DB_LOCK_MINWRITE:
- case DB_LOCK_MAXWRITE:
- id_array[lip->dd_id].count = lip->nwrites;
- break;
- default:
- break;
- }
- } else
- lip->dd_id = DD_INVALID_ID;
-
- }
-
- /*
-	 * We need only consider objects that have waiters, so we use
-	 * the list of objects with waiters (dd_objs) instead of traversing
-	 * the entire hash table. For each object, we traverse the waiters
-	 * list and add an entry in the waitsfor matrix for each waiter/holder
-	 * combination. We don't want to lock from the DD mutex to the
-	 * hash mutex, so we drop the deadlock mutex and get the hash mutex,
-	 * then check whether the object has changed. Once we have the object
-	 * locked, locks cannot be removed and lockers cannot go away.
- */
- if (0) {
- /* If an object has changed state, start over. */
-again: memset(bitmap, 0, count * sizeof(u_int32_t) * nentries);
- }
- LOCK_DD(env, region);
- op = SH_TAILQ_FIRST(&region->dd_objs, __db_lockobj);
- for (; op != NULL; op = np) {
- indx = op->indx;
- gen = op->generation;
- UNLOCK_DD(env, region);
-
- OBJECT_LOCK_NDX(lt, region, indx);
- if (gen != op->generation) {
- OBJECT_UNLOCK(lt, region, indx);
- goto again;
- }
-
- /*
- * First we go through and create a bit map that
- * represents all the holders of this object.
- */
-
- CLEAR_MAP(tmpmap, nentries);
- SH_TAILQ_FOREACH(lp, &op->holders, links, __db_lock) {
- lockerp = (DB_LOCKER *)R_ADDR(&lt->reginfo, lp->holder);
-
- if (lockerp->dd_id == DD_INVALID_ID) {
- /*
- * If the locker was not here when we started,
- * then it was not deadlocked at that time.
- */
- if (lockerp->master_locker == INVALID_ROFF)
- continue;
- dd = ((DB_LOCKER *)R_ADDR(&lt->reginfo,
- lockerp->master_locker))->dd_id;
- if (dd == DD_INVALID_ID)
- continue;
- lockerp->dd_id = dd;
- switch (atype) {
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MAXLOCKS:
- id_array[dd].count += lockerp->nlocks;
- break;
- case DB_LOCK_MINWRITE:
- case DB_LOCK_MAXWRITE:
- id_array[dd].count += lockerp->nwrites;
- break;
- default:
- break;
- }
-
- } else
- dd = lockerp->dd_id;
- id_array[dd].valid = 1;
-
- /*
- * If the holder has already been aborted, then
- * we should ignore it for now.
- */
- if (lp->status == DB_LSTAT_HELD)
- SET_MAP(tmpmap, dd);
- }
-
- /*
- * Next, for each waiter, we set its row in the matrix
- * equal to the map of holders we set up above.
- */
- for (is_first = 1,
- lp = SH_TAILQ_FIRST(&op->waiters, __db_lock);
- lp != NULL;
- is_first = 0,
- lp = SH_TAILQ_NEXT(lp, links, __db_lock)) {
- lockerp = (DB_LOCKER *)R_ADDR(&lt->reginfo, lp->holder);
- if (lp->status == DB_LSTAT_WAITING) {
- if (__lock_expired(env,
- &now, &lockerp->lk_expire)) {
- lp->status = DB_LSTAT_EXPIRED;
- MUTEX_UNLOCK(env, lp->mtx_lock);
- if (rejectp != NULL)
- ++*rejectp;
- continue;
- }
- if (timespecisset(&lockerp->lk_expire) &&
- (!timespecisset(&min_timeout) ||
- timespeccmp(
- &min_timeout, &lockerp->lk_expire, >)))
- min_timeout = lockerp->lk_expire;
- }
-
- if (lockerp->dd_id == DD_INVALID_ID) {
- dd = ((DB_LOCKER *)R_ADDR(&lt->reginfo,
- lockerp->master_locker))->dd_id;
- lockerp->dd_id = dd;
- switch (atype) {
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MAXLOCKS:
- id_array[dd].count += lockerp->nlocks;
- break;
- case DB_LOCK_MINWRITE:
- case DB_LOCK_MAXWRITE:
- id_array[dd].count += lockerp->nwrites;
- break;
- default:
- break;
- }
- } else
- dd = lockerp->dd_id;
- id_array[dd].valid = 1;
-
- /*
- * If the transaction is pending abortion, then
- * ignore it on this iteration.
- */
- if (lp->status != DB_LSTAT_WAITING)
- continue;
-
- entryp = bitmap + (nentries * dd);
- OR_MAP(entryp, tmpmap, nentries);
- /*
- * If this is the first waiter on the queue,
- * then we remove the waitsfor relationship
- * with oneself. However, if it's anywhere
- * else on the queue, then we have to keep
- * it and we have an automatic deadlock.
- */
- if (is_first) {
- if (ISSET_MAP(entryp, dd))
- id_array[dd].self_wait = 1;
- CLR_MAP(entryp, dd);
- }
- }
- LOCK_DD(env, region);
- np = SH_TAILQ_NEXT(op, dd_links, __db_lockobj);
- OBJECT_UNLOCK(lt, region, indx);
- }
- UNLOCK_DD(env, region);
-
- /*
- * Now for each locker, record its last lock and set abort status.
- * We need to look at the heldby list carefully. We have the LOCKERS
- * locked so they cannot go away. The lock at the head of the
- * list can be removed by locking the object it points at.
-	 * Since lock memory is not freed, once we get a lock we can look
-	 * at it safely; but SH_LIST_FIRST is not atomic, so we check that
-	 * the list has not gone empty during that macro. We check abort
- * status after building the bit maps so that we will not detect
- * a blocked transaction without noting that it is already aborting.
- */
- for (id = 0; id < count; id++) {
- if (!id_array[id].valid)
- continue;
- if ((ret = __lock_getlocker_int(lt,
- id_array[id].id, 0, &lockerp)) != 0 || lockerp == NULL)
- continue;
-
- /*
- * If this is a master transaction, try to
- * find one of its children's locks first,
- * as they are probably more recent.
- */
- child = SH_LIST_FIRST(&lockerp->child_locker, __db_locker);
- if (child != NULL) {
- do {
-c_retry: lp = SH_LIST_FIRST(&child->heldby, __db_lock);
- if (SH_LIST_EMPTY(&child->heldby) || lp == NULL)
- goto c_next;
-
- if (F_ISSET(child, DB_LOCKER_INABORT))
- id_array[id].in_abort = 1;
- ndx = lp->indx;
- OBJECT_LOCK_NDX(lt, region, ndx);
- if (lp != SH_LIST_FIRST(
- &child->heldby, __db_lock) ||
- ndx != lp->indx) {
- OBJECT_UNLOCK(lt, region, ndx);
- goto c_retry;
- }
-
- if (lp != NULL &&
- lp->status == DB_LSTAT_WAITING) {
- id_array[id].last_locker_id = child->id;
- goto get_lock;
- } else {
- OBJECT_UNLOCK(lt, region, ndx);
- }
-c_next: child = SH_LIST_NEXT(
- child, child_link, __db_locker);
- } while (child != NULL);
- }
-
-l_retry: lp = SH_LIST_FIRST(&lockerp->heldby, __db_lock);
- if (!SH_LIST_EMPTY(&lockerp->heldby) && lp != NULL) {
- ndx = lp->indx;
- OBJECT_LOCK_NDX(lt, region, ndx);
- if (lp != SH_LIST_FIRST(&lockerp->heldby, __db_lock) ||
- lp->indx != ndx) {
- OBJECT_UNLOCK(lt, region, ndx);
- goto l_retry;
- }
- id_array[id].last_locker_id = lockerp->id;
-get_lock: id_array[id].last_lock = R_OFFSET(&lt->reginfo, lp);
- id_array[id].last_obj = lp->obj;
- lo = (DB_LOCKOBJ *)((u_int8_t *)lp + lp->obj);
- id_array[id].last_ndx = lo->indx;
- pptr = SH_DBT_PTR(&lo->lockobj);
- if (lo->lockobj.size >= sizeof(db_pgno_t))
- memcpy(&id_array[id].pgno,
- pptr, sizeof(db_pgno_t));
- else
- id_array[id].pgno = 0;
- OBJECT_UNLOCK(lt, region, ndx);
- }
- if (F_ISSET(lockerp, DB_LOCKER_INABORT))
- id_array[id].in_abort = 1;
- }
- UNLOCK_LOCKERS(env, region);
- LOCK_SYSTEM_UNLOCK(lt, region);
-
- /*
- * Now we can release everything except the bitmap matrix that we
- * created.
- */
- *nlockers = id;
- *idmap = id_array;
- *bmp = bitmap;
- *allocp = nentries;
- __os_free(env, tmpmap);
-done: if (timespecisset(&region->next_timeout))
- region->next_timeout = min_timeout;
- return (0);
-}
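The waits-for structure built above is just a packed bit matrix: one row of DB_ALIGN(count, 32) / 32 words per locker. A minimal standalone sketch of that layout (the macro names here are hypothetical; the real code uses DB_ALIGN and the SET_MAP/ISSET_MAP family from the lock headers):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* One row of NENTRIES(count) 32-bit words per locker; bit j of row i
 * means "locker i waits for locker j".  NENTRIES mirrors the
 * DB_ALIGN(count, 32) / 32 computation in __dd_build. */
#define NENTRIES(count)          (((count) + 31u) / 32u)
#define ROW(bitmap, nentries, i) ((bitmap) + (size_t)(nentries) * (i))
#define SET_BIT(row, j)          ((row)[(j) / 32] |= 1u << ((j) % 32))
#define BIT_ISSET(row, j)        (((row)[(j) / 32] & (1u << ((j) % 32))) != 0)
```

A 40-locker matrix therefore needs two words per row, and "locker 3 waits for locker 35" sets bit 3 of the second word in row 3.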
-
-static int
-__dd_find(env, bmp, idmap, nlockers, nalloc, deadp)
- ENV *env;
- u_int32_t *bmp, nlockers, nalloc;
- locker_info *idmap;
- u_int32_t ***deadp;
-{
- u_int32_t i, j, k, *mymap, *tmpmap, **retp;
- u_int ndead, ndeadalloc;
- int ret;
-
-#undef INITIAL_DEAD_ALLOC
-#define INITIAL_DEAD_ALLOC 8
-
- ndeadalloc = INITIAL_DEAD_ALLOC;
- ndead = 0;
- if ((ret = __os_malloc(env,
- ndeadalloc * sizeof(u_int32_t *), &retp)) != 0)
- return (ret);
-
- /*
- * For each locker, OR in the bits from the lockers on which that
- * locker is waiting.
- */
- for (mymap = bmp, i = 0; i < nlockers; i++, mymap += nalloc) {
- if (!idmap[i].valid)
- continue;
- for (j = 0; j < nlockers; j++) {
- if (!ISSET_MAP(mymap, j))
- continue;
-
- /* Find the map for this bit. */
- tmpmap = bmp + (nalloc * j);
- OR_MAP(mymap, tmpmap, nalloc);
- if (!ISSET_MAP(mymap, i))
- continue;
-
- /* Make sure we leave room for NULL. */
- if (ndead + 2 >= ndeadalloc) {
- ndeadalloc <<= 1;
- /*
- * If the alloc fails, then simply return the
- * deadlocks that we already have.
- */
- if (__os_realloc(env,
- ndeadalloc * sizeof(u_int32_t *),
- &retp) != 0) {
- retp[ndead] = NULL;
- *deadp = retp;
- return (0);
- }
- }
- retp[ndead++] = mymap;
-
- /* Mark all participants in this deadlock invalid. */
- for (k = 0; k < nlockers; k++)
- if (ISSET_MAP(mymap, k))
- idmap[k].valid = 0;
- break;
- }
- }
- retp[ndead] = NULL;
- *deadp = retp;
- return (0);
-}
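The cycle test in __dd_find can be sketched in miniature for up to 32 lockers with a single word per row (hypothetical helper; the real function returns every deadlock found, while this sketch stops at the first). Note that the rows must be OR-ed in place, as the real code does: the closure accumulated for earlier lockers is what lets a later locker's single left-to-right pass see its own bit come back.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of __dd_find: bit j of waitsfor[i] means
 * "locker i waits for locker j".  OR-ing in the row of each locker we
 * wait on computes a transitive closure; if our own bit appears, we
 * are on a waits-for cycle, i.e. deadlocked.  Rows are mutated in
 * place so closure accumulates across lockers, as in the original. */
static int
find_deadlocked(uint32_t *waitsfor, int nlockers, int *victim)
{
	int i, j;

	for (i = 0; i < nlockers; i++)
		for (j = 0; j < nlockers; j++) {
			if (!(waitsfor[i] & (1u << j)))
				continue;
			waitsfor[i] |= waitsfor[j];	/* closure step */
			if (waitsfor[i] & (1u << i)) {	/* waits on itself */
				*victim = i;
				return (1);
			}
		}
	return (0);
}
```

For two lockers waiting on each other the first is reported; an acyclic waits-for graph reports nothing.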
-
-static int
-__dd_abort(env, info, statusp)
- ENV *env;
- locker_info *info;
- int *statusp;
-{
- struct __db_lock *lockp;
- DB_LOCKER *lockerp;
- DB_LOCKOBJ *sh_obj;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- int ret;
-
- *statusp = 0;
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- ret = 0;
-
- /* We must lock so this locker cannot go away while we abort it. */
- LOCK_SYSTEM_LOCK(lt, region);
- LOCK_LOCKERS(env, region);
-
- /*
- * Get the locker. If it's gone or was aborted while we were
- * detecting, return that.
- */
- if ((ret = __lock_getlocker_int(lt,
- info->last_locker_id, 0, &lockerp)) != 0)
- goto err;
- if (lockerp == NULL || F_ISSET(lockerp, DB_LOCKER_INABORT)) {
- *statusp = DB_ALREADY_ABORTED;
- goto out;
- }
-
- /*
- * Find the locker's last lock. It is possible for this lock to have
-	 * been freed, either through a timeout or another detector run.
- * First lock the lock object so it is stable.
- */
-
- OBJECT_LOCK_NDX(lt, region, info->last_ndx);
- if ((lockp = SH_LIST_FIRST(&lockerp->heldby, __db_lock)) == NULL) {
- *statusp = DB_ALREADY_ABORTED;
- goto done;
- }
- if (R_OFFSET(&lt->reginfo, lockp) != info->last_lock ||
- lockp->holder != R_OFFSET(&lt->reginfo, lockerp) ||
- F_ISSET(lockerp, DB_LOCKER_INABORT) ||
- lockp->obj != info->last_obj || lockp->status != DB_LSTAT_WAITING) {
- *statusp = DB_ALREADY_ABORTED;
- goto done;
- }
-
- sh_obj = (DB_LOCKOBJ *)((u_int8_t *)lockp + lockp->obj);
-
- /* Abort lock, take it off list, and wake up this lock. */
- lockp->status = DB_LSTAT_ABORTED;
- SH_TAILQ_REMOVE(&sh_obj->waiters, lockp, links, __db_lock);
-
- /*
- * Either the waiters list is now empty, in which case we remove
- * it from dd_objs, or it is not empty, in which case we need to
- * do promotion.
- */
- if (SH_TAILQ_FIRST(&sh_obj->waiters, __db_lock) == NULL) {
- LOCK_DD(env, region);
- SH_TAILQ_REMOVE(&region->dd_objs,
- sh_obj, dd_links, __db_lockobj);
- UNLOCK_DD(env, region);
- } else
- ret = __lock_promote(lt, sh_obj, NULL, 0);
- MUTEX_UNLOCK(env, lockp->mtx_lock);
-
- STAT(region->stat.st_ndeadlocks++);
-done: OBJECT_UNLOCK(lt, region, info->last_ndx);
-err:
-out: UNLOCK_LOCKERS(env, region);
- LOCK_SYSTEM_UNLOCK(lt, region);
- return (ret);
-}
-
-#ifdef DIAGNOSTIC
-static void
-__dd_debug(env, idmap, bitmap, nlockers, nalloc)
- ENV *env;
- locker_info *idmap;
- u_int32_t *bitmap, nlockers, nalloc;
-{
- DB_MSGBUF mb;
- u_int32_t i, j, *mymap;
-
- DB_MSGBUF_INIT(&mb);
-
- __db_msg(env, "Waitsfor array\nWaiter:\tWaiting on:");
- for (mymap = bitmap, i = 0; i < nlockers; i++, mymap += nalloc) {
- if (!idmap[i].valid)
- continue;
-
- __db_msgadd(env, &mb, /* Waiter. */
- "%lx/%lu:\t", (u_long)idmap[i].id, (u_long)idmap[i].pgno);
- for (j = 0; j < nlockers; j++)
- if (ISSET_MAP(mymap, j))
- __db_msgadd(env,
- &mb, " %lx", (u_long)idmap[j].id);
- __db_msgadd(env, &mb, " %lu", (u_long)idmap[i].last_lock);
- DB_MSGBUF_FLUSH(env, &mb);
- }
-}
-#endif
-
-/*
- * Given a bitmap that contains a deadlock, verify that the bit
- * specified in the which parameter indicates a transaction that
- * is actually deadlocked. Return 1 if really deadlocked, 0 otherwise.
- * deadmap -- the array that identified the deadlock.
- * tmpmap -- a temporary bit map into which we can OR things.
- * origmap -- a copy of the initial bitmaps from the dd_build phase.
- * nlockers -- the number of actual lockers under consideration.
- * nalloc -- the number of words allocated for the bitmap.
- * which -- the locker in question.
- */
-static int
-__dd_verify(idmap, deadmap, tmpmap, origmap, nlockers, nalloc, which)
- locker_info *idmap;
- u_int32_t *deadmap, *tmpmap, *origmap;
- u_int32_t nlockers, nalloc, which;
-{
- u_int32_t *tmap;
- u_int32_t j;
- int count;
-
- memset(tmpmap, 0, sizeof(u_int32_t) * nalloc);
-
- /*
- * In order for "which" to be actively involved in
-	 * the deadlock, removing it from the evaluation
- * must remove the deadlock. So, we OR together everyone
- * except which; if all the participants still have their
- * bits set, then the deadlock persists and which does
- * not participate. If the deadlock does not persist
- * then "which" does participate.
- */
- count = 0;
- for (j = 0; j < nlockers; j++) {
- if (!ISSET_MAP(deadmap, j) || j == which)
- continue;
-
- /* Find the map for this bit. */
- tmap = origmap + (nalloc * j);
-
- /*
- * We special case the first waiter who is also a holder, so
- * we don't automatically call that a deadlock. However, if
- * it really is a deadlock, we need the bit set now so that
- * we treat the first waiter like other waiters.
- */
- if (idmap[j].self_wait)
- SET_MAP(tmap, j);
- OR_MAP(tmpmap, tmap, nalloc);
- count++;
- }
-
- if (count == 1)
- return (1);
-
- /*
- * Now check the resulting map and see whether
- * all participants still have their bit set.
- */
- for (j = 0; j < nlockers; j++) {
- if (!ISSET_MAP(deadmap, j) || j == which)
- continue;
- if (!ISSET_MAP(tmpmap, j))
- return (1);
- }
- return (0);
-}
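The OR-everyone-except-`which` argument above can be sketched for up to 32 lockers with single-word rows (hypothetical helper name, and the second test below uses a deliberately contrived deadmap; the real function works on multi-word rows):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical miniature of __dd_verify: `deadmap` flags the deadlock
 * participants, rows[j] is locker j's original waits-for row.  OR
 * together the rows of everyone except `which`; if every other
 * participant is still waited on (its bit is set in the result), the
 * deadlock survives without `which`, so `which` does not genuinely
 * participate.  If some bit goes missing, `which` was needed. */
static int
verify_participant(uint32_t deadmap, const uint32_t *rows,
    int nlockers, int which)
{
	uint32_t acc;
	int count, j;

	acc = 0;
	count = 0;
	for (j = 0; j < nlockers; j++) {
		if (!(deadmap & (1u << j)) || j == which)
			continue;
		acc |= rows[j];
		count++;
	}
	if (count == 1)		/* Two-party deadlock: both participate. */
		return (1);
	for (j = 0; j < nlockers; j++) {
		if (!(deadmap & (1u << j)) || j == which)
			continue;
		if (!(acc & (1u << j)))
			return (1);
	}
	return (0);
}
```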
-
-/*
- * __dd_isolder --
- *
- * Figure out the relative age of two lockers. We make all lockers
- * older than all transactions, because that's how it's worked
- * historically (because lockers are lower ids).
- */
-static int
-__dd_isolder(a, b, lock_max, txn_max)
- u_int32_t a, b;
- u_int32_t lock_max, txn_max;
-{
- u_int32_t max;
-
- /* Check for comparing lock-id and txnid. */
- if (a <= DB_LOCK_MAXID && b > DB_LOCK_MAXID)
- return (1);
- if (b <= DB_LOCK_MAXID && a > DB_LOCK_MAXID)
- return (0);
-
- /* In the same space; figure out which one. */
- max = txn_max;
- if (a <= DB_LOCK_MAXID)
- max = lock_max;
-
- /*
- * We can't get a 100% correct ordering, because we don't know
- * where the current interval started and if there were older
- * lockers outside the interval. We do the best we can.
- */
-
- /*
- * Check for a wrapped case with ids above max.
- */
- if (a > max && b < max)
- return (1);
- if (b > max && a < max)
- return (0);
-
- return (a < b);
-}
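Ignoring the lock-id/txn-id split handled above via DB_LOCK_MAXID, the wrap-around ordering reduces to a few lines (hypothetical standalone helper restricted to ids from a single id space):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the wrap-around comparison at the heart of __dd_isolder:
 * ids above the current maximum were handed out before the id counter
 * wrapped, so they are older than any id below it; within one side of
 * the wrap, smaller ids are older. */
static int
isolder(uint32_t a, uint32_t b, uint32_t max)
{
	if (a > max && b < max)
		return (1);
	if (b > max && a < max)
		return (0);
	return (a < b);
}
```

With `max` at 500, id 900 predates the wrap and so is older than id 10, even though it is numerically larger.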
diff --git a/lock/lock_failchk.c b/lock/lock_failchk.c
deleted file mode 100644
index 75f85af..0000000
--- a/lock/lock_failchk.c
+++ /dev/null
@@ -1,111 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 2005-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-#include "dbinc/txn.h"
-
-/*
- * __lock_failchk --
- * Check for locks held by dead threads of control and release
- *	read locks. If any write locks were held by dead non-transactional
- * lockers then we must abort and run recovery. Otherwise we release
- * read locks for lockers owned by dead threads. Write locks for
- * dead transactional lockers will be freed when we abort the transaction.
- *
- * PUBLIC: int __lock_failchk __P((ENV *));
- */
-int
-__lock_failchk(env)
- ENV *env;
-{
- DB_ENV *dbenv;
- DB_LOCKER *lip;
- DB_LOCKREGION *lrp;
- DB_LOCKREQ request;
- DB_LOCKTAB *lt;
- u_int32_t i;
- int ret;
- char buf[DB_THREADID_STRLEN];
-
- dbenv = env->dbenv;
- lt = env->lk_handle;
- lrp = lt->reginfo.primary;
-
-retry: LOCK_LOCKERS(env, lrp);
-
- ret = 0;
- for (i = 0; i < lrp->locker_t_size; i++)
- SH_TAILQ_FOREACH(lip, &lt->locker_tab[i], links, __db_locker) {
- /*
- * If the locker is transactional, we can ignore it if
- * it has no read locks or has no locks at all. Check
-			 * the heldby list rather than nlocks since a lock may
-			 * be PENDING. __txn_failchk aborts any transactional
-			 * lockers. Non-transactional lockers progress to the
-			 * is_alive test.
- */
- if ((lip->id >= TXN_MINIMUM) &&
- (SH_LIST_EMPTY(&lip->heldby) ||
- lip->nlocks == lip->nwrites))
- continue;
-
- /* If the locker is still alive, it's not a problem. */
- if (dbenv->is_alive(dbenv, lip->pid, lip->tid, 0))
- continue;
-
- /*
- * We can only deal with read locks. If a
- * non-transactional locker holds write locks we
- * have to assume a Berkeley DB operation was
- * interrupted with only 1-of-N pages modified.
- */
- if (lip->id < TXN_MINIMUM && lip->nwrites != 0) {
- ret = __db_failed(env,
- "locker has write locks",
- lip->pid, lip->tid);
- break;
- }
-
- /*
- * Discard the locker and its read locks.
- */
- if (!SH_LIST_EMPTY(&lip->heldby)) {
- __db_msg(env,
- "Freeing read locks for locker %#lx: %s",
- (u_long)lip->id, dbenv->thread_id_string(
- dbenv, lip->pid, lip->tid, buf));
- UNLOCK_LOCKERS(env, lrp);
- memset(&request, 0, sizeof(request));
- request.op = DB_LOCK_PUT_READ;
- if ((ret = __lock_vec(env,
- lip, 0, &request, 1, NULL)) != 0)
- return (ret);
-		} else
-			UNLOCK_LOCKERS(env, lrp);
-
- /*
- * This locker is most likely referenced by a cursor
- * which is owned by a dead thread. Normally the
- * cursor would be available for other threads
- * but we assume the dead thread will never release
- * it.
- */
- if (lip->id < TXN_MINIMUM &&
- (ret = __lock_freefamilylocker(lt, lip)) != 0)
- return (ret);
- goto retry;
- }
-
- UNLOCK_LOCKERS(env, lrp);
-
- return (ret);
-}
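__lock_failchk leans on the application-supplied is_alive callback (installed with DB_ENV->set_isalive). A minimal POSIX-only sketch for process-scoped lockers (hypothetical and simplified: the real callback also receives the DB_ENV, a thread id, and flags such as DB_MUTEX_PROCESS_ONLY):

```c
#include <assert.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Probe whether a process still exists without delivering a signal:
 * kill(pid, 0) performs only the existence and permission checks. */
static int
is_alive_sketch(pid_t pid)
{
	return (kill(pid, 0) == 0);
}
```

The calling process is, of course, alive by its own measure, so `is_alive_sketch(getpid())` returns 1.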
diff --git a/lock/lock_id.c b/lock/lock_id.c
deleted file mode 100644
index d020e3c..0000000
--- a/lock/lock_id.c
+++ /dev/null
@@ -1,460 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-#include "dbinc/log.h"
-
-/*
- * __lock_id_pp --
- * ENV->lock_id pre/post processing.
- *
- * PUBLIC: int __lock_id_pp __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_id_pp(dbenv, idp)
- DB_ENV *dbenv;
- u_int32_t *idp;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_id", DB_INIT_LOCK);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env, (__lock_id(env, idp, NULL)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_id --
- * ENV->lock_id.
- *
- * PUBLIC: int __lock_id __P((ENV *, u_int32_t *, DB_LOCKER **));
- */
-int
-__lock_id(env, idp, lkp)
- ENV *env;
- u_int32_t *idp;
- DB_LOCKER **lkp;
-{
- DB_LOCKER *lk;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- u_int32_t id, *ids;
- int nids, ret;
-
- lk = NULL;
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- id = DB_LOCK_INVALIDID;
- ret = 0;
-
- LOCK_LOCKERS(env, region);
-
- /*
- * Allocate a new lock id. If we wrap around then we find the minimum
- * currently in use and make sure we can stay below that. This code is
- * similar to code in __txn_begin_int for recovering txn ids.
- *
- * Our current valid range can span the maximum valid value, so check
- * for it and wrap manually.
- */
- if (region->lock_id == DB_LOCK_MAXID &&
- region->cur_maxid != DB_LOCK_MAXID)
- region->lock_id = DB_LOCK_INVALIDID;
- if (region->lock_id == region->cur_maxid) {
- if ((ret = __os_malloc(env,
- sizeof(u_int32_t) * region->nlockers, &ids)) != 0)
- goto err;
- nids = 0;
- SH_TAILQ_FOREACH(lk, &region->lockers, ulinks, __db_locker)
- ids[nids++] = lk->id;
- region->lock_id = DB_LOCK_INVALIDID;
- region->cur_maxid = DB_LOCK_MAXID;
- if (nids != 0)
- __db_idspace(ids, nids,
- &region->lock_id, &region->cur_maxid);
- __os_free(env, ids);
- }
- id = ++region->lock_id;
-
- /* Allocate a locker for this id. */
- ret = __lock_getlocker_int(lt, id, 1, &lk);
-
-err: UNLOCK_LOCKERS(env, region);
-
- if (idp != NULL)
- *idp = id;
- if (lkp != NULL)
- *lkp = lk;
-
- return (ret);
-}
-
-/*
- * __lock_set_thread_id --
- * Set the thread_id in an existing locker.
- * PUBLIC: void __lock_set_thread_id __P((void *, pid_t, db_threadid_t));
- */
-void
-__lock_set_thread_id(lref_arg, pid, tid)
- void *lref_arg;
- pid_t pid;
- db_threadid_t tid;
-{
- DB_LOCKER *lref;
-
- lref = lref_arg;
- lref->pid = pid;
- lref->tid = tid;
-}
-
-/*
- * __lock_id_free_pp --
- * ENV->lock_id_free pre/post processing.
- *
- * PUBLIC: int __lock_id_free_pp __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_id_free_pp(dbenv, id)
- DB_ENV *dbenv;
- u_int32_t id;
-{
- DB_LOCKER *sh_locker;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_THREAD_INFO *ip;
- ENV *env;
- int handle_check, ret, t_ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_id_free", DB_INIT_LOCK);
-
- ENV_ENTER(env, ip);
-
- /* Check for replication block. */
- handle_check = IS_ENV_REPLICATED(env);
- if (handle_check && (ret = __env_rep_enter(env, 0)) != 0) {
- handle_check = 0;
- goto err;
- }
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
-
- LOCK_LOCKERS(env, region);
- if ((ret =
- __lock_getlocker_int(env->lk_handle, id, 0, &sh_locker)) == 0) {
- if (sh_locker != NULL)
- ret = __lock_freelocker(lt, region, sh_locker);
- else {
- __db_errx(env, "Unknown locker id: %lx", (u_long)id);
- ret = EINVAL;
- }
- }
- UNLOCK_LOCKERS(env, region);
-
- if (handle_check && (t_ret = __env_db_rep_exit(env)) != 0 && ret == 0)
- ret = t_ret;
-
-err: ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_id_free --
- * Free a locker id.
- *
- * PUBLIC: int __lock_id_free __P((ENV *, DB_LOCKER *));
- */
-int
-__lock_id_free(env, sh_locker)
- ENV *env;
- DB_LOCKER *sh_locker;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- int ret;
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- ret = 0;
-
- if (sh_locker->nlocks != 0) {
- __db_errx(env, "Locker still has locks");
- ret = EINVAL;
- goto err;
- }
-
- LOCK_LOCKERS(env, region);
- ret = __lock_freelocker(lt, region, sh_locker);
- UNLOCK_LOCKERS(env, region);
-
-err:
- return (ret);
-}
-
-/*
- * __lock_id_set --
- * Set the current locker ID and current maximum unused ID (for
- * testing purposes only).
- *
- * PUBLIC: int __lock_id_set __P((ENV *, u_int32_t, u_int32_t));
- */
-int
-__lock_id_set(env, cur_id, max_id)
- ENV *env;
- u_int32_t cur_id, max_id;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "lock_id_set", DB_INIT_LOCK);
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- region->lock_id = cur_id;
- region->cur_maxid = max_id;
-
- return (0);
-}
-
-/*
- * __lock_getlocker --
- * Get a locker in the locker hash table. The create parameter
- * indicates if the locker should be created if it doesn't exist in
- * the table.
- *
- * This must be called with the locker mutex held if create == 1.
- *
- * PUBLIC: int __lock_getlocker __P((DB_LOCKTAB *,
- * PUBLIC: u_int32_t, int, DB_LOCKER **));
- * PUBLIC: int __lock_getlocker_int __P((DB_LOCKTAB *,
- * PUBLIC: u_int32_t, int, DB_LOCKER **));
- */
-int
-__lock_getlocker(lt, locker, create, retp)
- DB_LOCKTAB *lt;
- u_int32_t locker;
- int create;
- DB_LOCKER **retp;
-{
- DB_LOCKREGION *region;
- ENV *env;
- int ret;
-
- COMPQUIET(region, NULL);
- env = lt->env;
- region = lt->reginfo.primary;
-
- LOCK_LOCKERS(env, region);
- ret = __lock_getlocker_int(lt, locker, create, retp);
- UNLOCK_LOCKERS(env, region);
-
- return (ret);
-}
-
-int
-__lock_getlocker_int(lt, locker, create, retp)
- DB_LOCKTAB *lt;
- u_int32_t locker;
- int create;
- DB_LOCKER **retp;
-{
- DB_LOCKER *sh_locker;
- DB_LOCKREGION *region;
- ENV *env;
- u_int32_t indx;
-
- env = lt->env;
- region = lt->reginfo.primary;
-
- LOCKER_HASH(lt, region, locker, indx);
-
- /*
- * If we find the locker, then we can just return it. If we don't find
- * the locker, then we need to create it.
- */
- SH_TAILQ_FOREACH(sh_locker, &lt->locker_tab[indx], links, __db_locker)
- if (sh_locker->id == locker)
- break;
- if (sh_locker == NULL && create) {
- /* Create new locker and then insert it into hash table. */
- if ((sh_locker = SH_TAILQ_FIRST(
- &region->free_lockers, __db_locker)) == NULL)
- return (__lock_nomem(env, "locker entries"));
- SH_TAILQ_REMOVE(
- &region->free_lockers, sh_locker, links, __db_locker);
- ++region->nlockers;
-#ifdef HAVE_STATISTICS
- if (region->nlockers > region->stat.st_maxnlockers)
- region->stat.st_maxnlockers = region->nlockers;
-#endif
- sh_locker->id = locker;
- env->dbenv->thread_id(
- env->dbenv, &sh_locker->pid, &sh_locker->tid);
- sh_locker->dd_id = 0;
- sh_locker->master_locker = INVALID_ROFF;
- sh_locker->parent_locker = INVALID_ROFF;
- SH_LIST_INIT(&sh_locker->child_locker);
- sh_locker->flags = 0;
- SH_LIST_INIT(&sh_locker->heldby);
- sh_locker->nlocks = 0;
- sh_locker->nwrites = 0;
- sh_locker->lk_timeout = 0;
- timespecclear(&sh_locker->tx_expire);
- timespecclear(&sh_locker->lk_expire);
-
- SH_TAILQ_INSERT_HEAD(
- &lt->locker_tab[indx], sh_locker, links, __db_locker);
- SH_TAILQ_INSERT_HEAD(&region->lockers,
- sh_locker, ulinks, __db_locker);
- }
-
- *retp = sh_locker;
- return (0);
-}
-
-/*
- * __lock_addfamilylocker
- * Put a locker entry in for a child transaction.
- *
- * PUBLIC: int __lock_addfamilylocker __P((ENV *, u_int32_t, u_int32_t));
- */
-int
-__lock_addfamilylocker(env, pid, id)
- ENV *env;
- u_int32_t pid, id;
-{
- DB_LOCKER *lockerp, *mlockerp;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- int ret;
-
- COMPQUIET(region, NULL);
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- LOCK_LOCKERS(env, region);
-
- /* get/create the parent locker info */
- if ((ret = __lock_getlocker_int(lt, pid, 1, &mlockerp)) != 0)
- goto err;
-
- /*
- * We assume that only one thread can manipulate
- * a single transaction family.
- * Therefore the master locker cannot go away while
- * we manipulate it, nor can another child in the
- * family be created at the same time.
- */
- if ((ret = __lock_getlocker_int(lt, id, 1, &lockerp)) != 0)
- goto err;
-
- /* Point to our parent. */
- lockerp->parent_locker = R_OFFSET(&lt->reginfo, mlockerp);
-
- /* See if this locker is the family master. */
- if (mlockerp->master_locker == INVALID_ROFF)
- lockerp->master_locker = R_OFFSET(&lt->reginfo, mlockerp);
- else {
- lockerp->master_locker = mlockerp->master_locker;
- mlockerp = R_ADDR(&lt->reginfo, mlockerp->master_locker);
- }
-
- /*
- * Link the child at the head of the master's list.
-	 * The guess is that, when looking for deadlock,
-	 * the most recent child is the one that's blocked.
- */
- SH_LIST_INSERT_HEAD(
- &mlockerp->child_locker, lockerp, child_link, __db_locker);
-
-err: UNLOCK_LOCKERS(env, region);
-
- return (ret);
-}
-
-/*
- * __lock_freefamilylocker
- * Remove a locker from the hash table and its family.
- *
- * This must be called without the locker bucket locked.
- *
- * PUBLIC: int __lock_freefamilylocker __P((DB_LOCKTAB *, DB_LOCKER *));
- */
-int
-__lock_freefamilylocker(lt, sh_locker)
- DB_LOCKTAB *lt;
- DB_LOCKER *sh_locker;
-{
- DB_LOCKREGION *region;
- ENV *env;
- int ret;
-
- env = lt->env;
- region = lt->reginfo.primary;
-
- if (sh_locker == NULL)
- return (0);
-
- LOCK_LOCKERS(env, region);
-
- if (SH_LIST_FIRST(&sh_locker->heldby, __db_lock) != NULL) {
- ret = EINVAL;
- __db_errx(env, "Freeing locker with locks");
- goto err;
- }
-
- /* If this is part of a family, we must fix up its links. */
- if (sh_locker->master_locker != INVALID_ROFF)
- SH_LIST_REMOVE(sh_locker, child_link, __db_locker);
-
- ret = __lock_freelocker(lt, region, sh_locker);
-
-err: UNLOCK_LOCKERS(env, region);
- return (ret);
-}
-
-/*
- * __lock_freelocker
- * Common code for deleting a locker; must be called with the
- * locker bucket locked.
- *
- * PUBLIC: int __lock_freelocker
- * PUBLIC: __P((DB_LOCKTAB *, DB_LOCKREGION *, DB_LOCKER *));
- */
-int
-__lock_freelocker(lt, region, sh_locker)
- DB_LOCKTAB *lt;
- DB_LOCKREGION *region;
- DB_LOCKER *sh_locker;
-
-{
- u_int32_t indx;
- LOCKER_HASH(lt, region, sh_locker->id, indx);
- SH_TAILQ_REMOVE(&lt->locker_tab[indx], sh_locker, links, __db_locker);
- SH_TAILQ_INSERT_HEAD(
- &region->free_lockers, sh_locker, links, __db_locker);
- SH_TAILQ_REMOVE(&region->lockers, sh_locker, ulinks, __db_locker);
- region->nlockers--;
- return (0);
-}
diff --git a/lock/lock_list.c b/lock/lock_list.c
deleted file mode 100644
index 88b09bd..0000000
--- a/lock/lock_list.c
+++ /dev/null
@@ -1,364 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-#include "dbinc/log.h"
-
-static int __lock_sort_cmp __P((const void *, const void *));
-
-/*
- * Lock list routines.
- * The list is composed of a 32-bit count of locks followed by
- * each lock. A lock is represented by a 16-bit page-count, a lock
- * object and a page list. A lock object consists of a 16-bit size
- * and the object itself. In a pseudo BNF notation, you get:
- *
- * LIST = COUNT32 LOCK*
- * LOCK = COUNT16 LOCKOBJ PAGELIST
- * LOCKOBJ = COUNT16 OBJ
- * PAGELIST = COUNT32*
- *
- * (Recall that X* means "0 or more X's")
- *
- * In most cases, the OBJ is a struct __db_ilock and the page list is
- * a series of (32-bit) page numbers that should get written into the
- * pgno field of the __db_ilock. So, the actual number of pages locked
- * is the number of items in the PAGELIST plus 1. If this is an application-
- * specific lock, then we cannot interpret obj and the pagelist must
- * be empty.
- *
- * Consider a lock list for: File A, pages 1&2, File B pages 3-5, Applock
- * This would be represented as:
- * 5 1 [fid=A;page=1] 2 2 [fid=B;page=3] 4 5 0 APPLOCK
- * ------------------ -------------------- ---------
- * LOCK for file A LOCK for file B application-specific lock
- */
-
-#define MAX_PGNOS 0xffff
-
-/*
- * These macros are bigger than one might expect because some compilers say a
- * cast does not return an lvalue, so constructs like *(u_int32_t*)dp = count;
- * generate warnings.
- */
-#define RET_SIZE(size, count) ((size) + \
- sizeof(u_int32_t) + (count) * 2 * sizeof(u_int16_t))
-
-#define PUT_COUNT(dp, count) do { u_int32_t __c = (count); \
- LOGCOPY_32(env, dp, &__c); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int32_t); \
- } while (0)
-#define PUT_PCOUNT(dp, count) do { u_int16_t __c = (count); \
- LOGCOPY_16(env, dp, &__c); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int16_t); \
- } while (0)
-#define PUT_SIZE(dp, size) do { u_int16_t __s = (size); \
- LOGCOPY_16(env, dp, &__s); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int16_t); \
- } while (0)
-#define PUT_PGNO(dp, pgno) do { db_pgno_t __pg = (pgno); \
- LOGCOPY_32(env, dp, &__pg); \
- dp = (u_int8_t *)dp + \
- sizeof(db_pgno_t); \
- } while (0)
-#define COPY_OBJ(dp, obj) do { \
- memcpy(dp, \
- (obj)->data, (obj)->size); \
- dp = (u_int8_t *)dp + \
- DB_ALIGN((obj)->size, \
- sizeof(u_int32_t)); \
- } while (0)
-#define GET_COUNT(dp, count) do { LOGCOPY_32(env, &count, dp); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int32_t); \
- } while (0)
-#define GET_PCOUNT(dp, count) do { LOGCOPY_16(env, &count, dp); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int16_t); \
- } while (0)
-#define GET_SIZE(dp, size) do { LOGCOPY_16(env, &size, dp); \
- dp = (u_int8_t *)dp + \
- sizeof(u_int16_t); \
- } while (0)
-#define GET_PGNO(dp, pgno) do { LOGCOPY_32(env, &pgno, dp); \
- dp = (u_int8_t *)dp + \
- sizeof(db_pgno_t); \
- } while (0)
-
-/*
- * __lock_fix_list --
- *
- * PUBLIC: int __lock_fix_list __P((ENV *, DBT *, u_int32_t));
- */
-int
-__lock_fix_list(env, list_dbt, nlocks)
- ENV *env;
- DBT *list_dbt;
- u_int32_t nlocks;
-{
- DBT *obj;
- DB_LOCK_ILOCK *lock, *plock;
- u_int32_t i, j, nfid, npgno, size;
- u_int8_t *data, *dp;
- int ret;
-
- if ((size = list_dbt->size) == 0)
- return (0);
-
- obj = (DBT *)list_dbt->data;
-
- /*
- * If necessary sort the list of locks so that locks on the same fileid
- * are together. We do not sort 1 or 2 locks because by definition if
- * there are locks on the same fileid they will be together. The sort
- * will also move any locks that do not look like page locks to the end
- * of the list so we can stop looking for locks we can combine when we
- * hit one.
- */
- switch (nlocks) {
- case 1:
- size = RET_SIZE(obj->size, 1);
- if ((ret = __os_malloc(env, size, &data)) != 0)
- return (ret);
-
- dp = data;
- PUT_COUNT(dp, 1);
- PUT_PCOUNT(dp, 0);
- PUT_SIZE(dp, obj->size);
- COPY_OBJ(dp, obj);
- break;
- default:
- /* Sort so that all locks with same fileid are together. */
- qsort(list_dbt->data, nlocks, sizeof(DBT), __lock_sort_cmp);
- /* FALLTHROUGH */
- case 2:
- nfid = npgno = 0;
- i = 0;
- if (obj->size != sizeof(DB_LOCK_ILOCK))
- goto not_ilock;
-
- nfid = 1;
- plock = (DB_LOCK_ILOCK *)obj->data;
-
- /* We use ulen to keep track of the number of pages. */
- j = 0;
- obj[0].ulen = 0;
- for (i = 1; i < nlocks; i++) {
- if (obj[i].size != sizeof(DB_LOCK_ILOCK))
- break;
- lock = (DB_LOCK_ILOCK *)obj[i].data;
- if (obj[j].ulen < MAX_PGNOS &&
- lock->type == plock->type &&
- memcmp(lock->fileid,
- plock->fileid, DB_FILE_ID_LEN) == 0) {
- obj[j].ulen++;
- npgno++;
- } else {
- nfid++;
- plock = lock;
- j = i;
- obj[j].ulen = 0;
- }
- }
-
-not_ilock: size = nfid * sizeof(DB_LOCK_ILOCK);
- size += npgno * sizeof(db_pgno_t);
- /* Add the number of nonstandard locks and get their size. */
- nfid += nlocks - i;
- for (; i < nlocks; i++) {
- size += obj[i].size;
- obj[i].ulen = 0;
- }
-
- size = RET_SIZE(size, nfid);
- if ((ret = __os_malloc(env, size, &data)) != 0)
- return (ret);
-
- dp = data;
- PUT_COUNT(dp, nfid);
-
- for (i = 0; i < nlocks; i = j) {
- PUT_PCOUNT(dp, obj[i].ulen);
- PUT_SIZE(dp, obj[i].size);
- COPY_OBJ(dp, &obj[i]);
- lock = (DB_LOCK_ILOCK *)obj[i].data;
- for (j = i + 1; j <= i + obj[i].ulen; j++) {
- lock = (DB_LOCK_ILOCK *)obj[j].data;
- PUT_PGNO(dp, lock->pgno);
- }
- }
- }
-
- __os_free(env, list_dbt->data);
-
- list_dbt->data = data;
- list_dbt->size = size;
-
- return (0);
-}
-
-/*
- * PUBLIC: int __lock_get_list __P((ENV *, DB_LOCKER *, u_int32_t,
- * PUBLIC: db_lockmode_t, DBT *));
- */
-int
-__lock_get_list(env, locker, flags, lock_mode, list)
- ENV *env;
- DB_LOCKER *locker;
- u_int32_t flags;
- db_lockmode_t lock_mode;
- DBT *list;
-{
- DBT obj_dbt;
- DB_LOCK ret_lock;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_LOCK_ILOCK *lock;
- db_pgno_t save_pgno;
- u_int16_t npgno, size;
- u_int32_t i, nlocks;
- int ret;
- void *data, *dp;
-
- if (list->size == 0)
- return (0);
- ret = 0;
- data = NULL;
-
- lt = env->lk_handle;
- dp = list->data;
-
- /*
-	 * There is no assurance that log records will be aligned. If not, then
- * copy the data to an aligned region so the rest of the code does
- * not have to worry about it.
- */
- if ((uintptr_t)dp != DB_ALIGN((uintptr_t)dp, sizeof(u_int32_t))) {
- if ((ret = __os_malloc(env, list->size, &data)) != 0)
- return (ret);
- memcpy(data, list->data, list->size);
- dp = data;
- }
-
- region = lt->reginfo.primary;
- LOCK_SYSTEM_LOCK(lt, region);
- GET_COUNT(dp, nlocks);
-
- for (i = 0; i < nlocks; i++) {
- GET_PCOUNT(dp, npgno);
- GET_SIZE(dp, size);
- lock = (DB_LOCK_ILOCK *) dp;
- save_pgno = lock->pgno;
- obj_dbt.data = dp;
- obj_dbt.size = size;
- dp = ((u_int8_t *)dp) + DB_ALIGN(size, sizeof(u_int32_t));
- do {
- if ((ret = __lock_get_internal(lt, locker,
- flags, &obj_dbt, lock_mode, 0, &ret_lock)) != 0) {
- lock->pgno = save_pgno;
- goto err;
- }
- if (npgno != 0)
- GET_PGNO(dp, lock->pgno);
- } while (npgno-- != 0);
- lock->pgno = save_pgno;
- }
-
-err: LOCK_SYSTEM_UNLOCK(lt, region);
- if (data != NULL)
- __os_free(env, data);
- return (ret);
-}
-
-#define UINT32_CMP(A, B) ((A) == (B) ? 0 : ((A) > (B) ? 1 : -1))
-static int
-__lock_sort_cmp(a, b)
- const void *a, *b;
-{
- const DBT *d1, *d2;
- DB_LOCK_ILOCK *l1, *l2;
-
- d1 = a;
- d2 = b;
-
- /* Force all non-standard locks to sort at end. */
- if (d1->size != sizeof(DB_LOCK_ILOCK)) {
- if (d2->size != sizeof(DB_LOCK_ILOCK))
- return (UINT32_CMP(d1->size, d2->size));
- else
- return (1);
- } else if (d2->size != sizeof(DB_LOCK_ILOCK))
- return (-1);
-
- l1 = d1->data;
- l2 = d2->data;
- if (l1->type != l2->type)
- return (UINT32_CMP(l1->type, l2->type));
- return (memcmp(l1->fileid, l2->fileid, DB_FILE_ID_LEN));
-}
-
-/*
- * PUBLIC: void __lock_list_print __P((ENV *, DBT *));
- */
-void
-__lock_list_print(env, list)
- ENV *env;
- DBT *list;
-{
- DB_LOCK_ILOCK *lock;
- db_pgno_t pgno;
- u_int16_t npgno, size;
- u_int32_t i, nlocks;
- u_int8_t *fidp;
- char *fname, *dname, *p, namebuf[26];
- void *dp;
-
- if (list->size == 0)
- return;
- dp = list->data;
-
- GET_COUNT(dp, nlocks);
-
- for (i = 0; i < nlocks; i++) {
- GET_PCOUNT(dp, npgno);
- GET_SIZE(dp, size);
- lock = (DB_LOCK_ILOCK *) dp;
- fidp = lock->fileid;
- (void)__dbreg_get_name(env, fidp, &fname, &dname);
- printf("\t");
- if (fname == NULL && dname == NULL)
- printf("(%lx %lx %lx %lx %lx)",
- (u_long)fidp[0], (u_long)fidp[1], (u_long)fidp[2],
- (u_long)fidp[3], (u_long)fidp[4]);
- else {
- if (fname != NULL && dname != NULL) {
- (void)snprintf(namebuf, sizeof(namebuf),
- "%14s.%-10s", fname, dname);
- p = namebuf;
- } else if (fname != NULL)
- p = fname;
- else
- p = dname;
- printf("%-25s", p);
- }
- dp = ((u_int8_t *)dp) + DB_ALIGN(size, sizeof(u_int32_t));
- LOGCOPY_32(env, &pgno, &lock->pgno);
- do {
-			printf(" %lu", (u_long)pgno);
- if (npgno != 0)
- GET_PGNO(dp, pgno);
- } while (npgno-- != 0);
- printf("\n");
- }
-}
diff --git a/lock/lock_method.c b/lock/lock_method.c
deleted file mode 100644
index 2193647..0000000
--- a/lock/lock_method.c
+++ /dev/null
@@ -1,536 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-
-/*
- * __lock_env_create --
- * Lock specific creation of the DB_ENV structure.
- *
- * PUBLIC: int __lock_env_create __P((DB_ENV *));
- */
-int
-__lock_env_create(dbenv)
- DB_ENV *dbenv;
-{
- u_int32_t cpu;
- /*
- * !!!
- * Our caller has not yet had the opportunity to reset the panic
- * state or turn off mutex locking, and so we can neither check
- * the panic state or acquire a mutex in the DB_ENV create path.
- */
- dbenv->lk_max = DB_LOCK_DEFAULT_N;
- dbenv->lk_max_lockers = DB_LOCK_DEFAULT_N;
- dbenv->lk_max_objects = DB_LOCK_DEFAULT_N;
-
-	/*
-	 * Default to 10 partitions per CPU. This seems to be near
-	 * the point of diminishing returns on Xeon-type processors.
-	 * The CPU count often includes hyper-threads, and if there
-	 * is only one CPU you probably do not want to use partitions.
-	 */
- cpu = __os_cpu_count();
- dbenv->lk_partitions = cpu > 1 ? 10 * cpu : 1;
-
- return (0);
-}
-
-/*
- * __lock_env_destroy --
- * Lock specific destruction of the DB_ENV structure.
- *
- * PUBLIC: void __lock_env_destroy __P((DB_ENV *));
- */
-void
-__lock_env_destroy(dbenv)
- DB_ENV *dbenv;
-{
- ENV *env;
-
- env = dbenv->env;
-
- if (dbenv->lk_conflicts != NULL) {
- __os_free(env, dbenv->lk_conflicts);
- dbenv->lk_conflicts = NULL;
- }
-}
-
-/*
- * __lock_get_lk_conflicts
- * Get the conflicts matrix.
- *
- * PUBLIC: int __lock_get_lk_conflicts
- * PUBLIC: __P((DB_ENV *, const u_int8_t **, int *));
- */
-int
-__lock_get_lk_conflicts(dbenv, lk_conflictsp, lk_modesp)
- DB_ENV *dbenv;
- const u_int8_t **lk_conflictsp;
- int *lk_modesp;
-{
- DB_LOCKTAB *lt;
- ENV *env;
-
- env = dbenv->env;
- lt = env->lk_handle;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_conflicts", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- /* Cannot be set after open, no lock required to read. */
- if (lk_conflictsp != NULL)
- *lk_conflictsp = lt->conflicts;
- if (lk_modesp != NULL)
- *lk_modesp = ((DB_LOCKREGION *)
- (lt->reginfo.primary))->nmodes;
- } else {
- if (lk_conflictsp != NULL)
- *lk_conflictsp = dbenv->lk_conflicts;
- if (lk_modesp != NULL)
- *lk_modesp = dbenv->lk_modes;
- }
- return (0);
-}
-
-/*
- * __lock_set_lk_conflicts
- * Set the conflicts matrix.
- *
- * PUBLIC: int __lock_set_lk_conflicts __P((DB_ENV *, u_int8_t *, int));
- */
-int
-__lock_set_lk_conflicts(dbenv, lk_conflicts, lk_modes)
- DB_ENV *dbenv;
- u_int8_t *lk_conflicts;
- int lk_modes;
-{
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_lk_conflicts");
-
- if (dbenv->lk_conflicts != NULL) {
- __os_free(env, dbenv->lk_conflicts);
- dbenv->lk_conflicts = NULL;
- }
- if ((ret = __os_malloc(env,
- (size_t)(lk_modes * lk_modes), &dbenv->lk_conflicts)) != 0)
- return (ret);
- memcpy(
- dbenv->lk_conflicts, lk_conflicts, (size_t)(lk_modes * lk_modes));
- dbenv->lk_modes = lk_modes;
-
- return (0);
-}
-
-/*
- * PUBLIC: int __lock_get_lk_detect __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_get_lk_detect(dbenv, lk_detectp)
- DB_ENV *dbenv;
- u_int32_t *lk_detectp;
-{
- DB_LOCKTAB *lt;
- DB_THREAD_INFO *ip;
- ENV *env;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_detect", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- lt = env->lk_handle;
- ENV_ENTER(env, ip);
- LOCK_REGION_LOCK(env);
- *lk_detectp = ((DB_LOCKREGION *)lt->reginfo.primary)->detect;
- LOCK_REGION_UNLOCK(env);
- ENV_LEAVE(env, ip);
- } else
- *lk_detectp = dbenv->lk_detect;
- return (0);
-}
-
-/*
- * __lock_set_lk_detect
- * DB_ENV->set_lk_detect.
- *
- * PUBLIC: int __lock_set_lk_detect __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_set_lk_detect(dbenv, lk_detect)
- DB_ENV *dbenv;
- u_int32_t lk_detect;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->set_lk_detect", DB_INIT_LOCK);
-
- switch (lk_detect) {
- case DB_LOCK_DEFAULT:
- case DB_LOCK_EXPIRE:
- case DB_LOCK_MAXLOCKS:
- case DB_LOCK_MAXWRITE:
- case DB_LOCK_MINLOCKS:
- case DB_LOCK_MINWRITE:
- case DB_LOCK_OLDEST:
- case DB_LOCK_RANDOM:
- case DB_LOCK_YOUNGEST:
- break;
- default:
- __db_errx(env,
- "DB_ENV->set_lk_detect: unknown deadlock detection mode specified");
- return (EINVAL);
- }
-
- ret = 0;
- if (LOCKING_ON(env)) {
- ENV_ENTER(env, ip);
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- LOCK_REGION_LOCK(env);
- /*
- * Check for incompatible automatic deadlock detection requests.
-		 * There are scenarios where changing the detector configuration
-		 * is reasonable, but we disallow them, guessing that such a
-		 * change is likely to be an application error.
- *
- * We allow applications to turn on the lock detector, and we
- * ignore attempts to set it to the default or current value.
- */
- if (region->detect != DB_LOCK_NORUN &&
- lk_detect != DB_LOCK_DEFAULT &&
- region->detect != lk_detect) {
- __db_errx(env,
- "DB_ENV->set_lk_detect: incompatible deadlock detector mode");
- ret = EINVAL;
- } else
- if (region->detect == DB_LOCK_NORUN)
- region->detect = lk_detect;
- LOCK_REGION_UNLOCK(env);
- ENV_LEAVE(env, ip);
- } else
- dbenv->lk_detect = lk_detect;
-
- return (ret);
-}
-
-/*
- * PUBLIC: int __lock_get_lk_max_locks __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_get_lk_max_locks(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_maxlocks", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- /* Cannot be set after open, no lock required to read. */
- *lk_maxp = ((DB_LOCKREGION *)
- env->lk_handle->reginfo.primary)->stat.st_maxlocks;
- } else
- *lk_maxp = dbenv->lk_max;
- return (0);
-}
-
-/*
- * __lock_set_lk_max_locks
- * DB_ENV->set_lk_max_locks.
- *
- * PUBLIC: int __lock_set_lk_max_locks __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_set_lk_max_locks(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_lk_max_locks");
-
- dbenv->lk_max = lk_max;
- return (0);
-}
-
-/*
- * PUBLIC: int __lock_get_lk_max_lockers __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_get_lk_max_lockers(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_max_lockers", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- /* Cannot be set after open, no lock required to read. */
- *lk_maxp = ((DB_LOCKREGION *)
- env->lk_handle->reginfo.primary)->stat.st_maxlockers;
- } else
- *lk_maxp = dbenv->lk_max_lockers;
- return (0);
-}
-
-/*
- * __lock_set_lk_max_lockers
- * DB_ENV->set_lk_max_lockers.
- *
- * PUBLIC: int __lock_set_lk_max_lockers __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_set_lk_max_lockers(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_lk_max_lockers");
-
- dbenv->lk_max_lockers = lk_max;
- return (0);
-}
-
-/*
- * PUBLIC: int __lock_get_lk_max_objects __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_get_lk_max_objects(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_max_objects", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- /* Cannot be set after open, no lock required to read. */
- *lk_maxp = ((DB_LOCKREGION *)
- env->lk_handle->reginfo.primary)->stat.st_maxobjects;
- } else
- *lk_maxp = dbenv->lk_max_objects;
- return (0);
-}
-
-/*
- * __lock_set_lk_max_objects
- * DB_ENV->set_lk_max_objects.
- *
- * PUBLIC: int __lock_set_lk_max_objects __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_set_lk_max_objects(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_lk_max_objects");
-
- dbenv->lk_max_objects = lk_max;
- return (0);
-}
-/*
- * PUBLIC: int __lock_get_lk_partitions __P((DB_ENV *, u_int32_t *));
- */
-int
-__lock_get_lk_partitions(dbenv, lk_partitionp)
- DB_ENV *dbenv;
- u_int32_t *lk_partitionp;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_lk_partitions", DB_INIT_LOCK);
-
- if (LOCKING_ON(env)) {
- /* Cannot be set after open, no lock required to read. */
- *lk_partitionp = ((DB_LOCKREGION *)
- env->lk_handle->reginfo.primary)->stat.st_partitions;
- } else
- *lk_partitionp = dbenv->lk_partitions;
- return (0);
-}
-
-/*
- * __lock_set_lk_partitions
- * DB_ENV->set_lk_partitions.
- *
- * PUBLIC: int __lock_set_lk_partitions __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_set_lk_partitions(dbenv, lk_partitions)
- DB_ENV *dbenv;
- u_int32_t lk_partitions;
-{
- ENV *env;
-
- env = dbenv->env;
-
- ENV_ILLEGAL_AFTER_OPEN(env, "DB_ENV->set_lk_partitions");
-
- dbenv->lk_partitions = lk_partitions;
- return (0);
-}
-
-/*
- * PUBLIC: int __lock_get_env_timeout
- * PUBLIC: __P((DB_ENV *, db_timeout_t *, u_int32_t));
- */
-int
-__lock_get_env_timeout(dbenv, timeoutp, flag)
- DB_ENV *dbenv;
- db_timeout_t *timeoutp;
- u_int32_t flag;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->get_env_timeout", DB_INIT_LOCK);
-
- ret = 0;
- if (LOCKING_ON(env)) {
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- ENV_ENTER(env, ip);
- LOCK_REGION_LOCK(env);
- switch (flag) {
- case DB_SET_LOCK_TIMEOUT:
- *timeoutp = region->lk_timeout;
- break;
- case DB_SET_TXN_TIMEOUT:
- *timeoutp = region->tx_timeout;
- break;
- default:
- ret = 1;
- break;
- }
- LOCK_REGION_UNLOCK(env);
- ENV_LEAVE(env, ip);
- } else
- switch (flag) {
- case DB_SET_LOCK_TIMEOUT:
- *timeoutp = dbenv->lk_timeout;
- break;
- case DB_SET_TXN_TIMEOUT:
- *timeoutp = dbenv->tx_timeout;
- break;
- default:
- ret = 1;
- break;
- }
-
- if (ret)
- ret = __db_ferr(env, "DB_ENV->get_timeout", 0);
-
- return (ret);
-}
-
-/*
- * __lock_set_env_timeout
- * DB_ENV->set_lock_timeout.
- *
- * PUBLIC: int __lock_set_env_timeout __P((DB_ENV *, db_timeout_t, u_int32_t));
- */
-int
-__lock_set_env_timeout(dbenv, timeout, flags)
- DB_ENV *dbenv;
- db_timeout_t timeout;
- u_int32_t flags;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_NOT_CONFIGURED(env,
- env->lk_handle, "DB_ENV->set_env_timeout", DB_INIT_LOCK);
-
- ret = 0;
- if (LOCKING_ON(env)) {
- lt = env->lk_handle;
- region = lt->reginfo.primary;
- ENV_ENTER(env, ip);
- LOCK_REGION_LOCK(env);
- switch (flags) {
- case DB_SET_LOCK_TIMEOUT:
- region->lk_timeout = timeout;
- break;
- case DB_SET_TXN_TIMEOUT:
- region->tx_timeout = timeout;
- break;
- default:
- ret = 1;
- break;
- }
- LOCK_REGION_UNLOCK(env);
- ENV_LEAVE(env, ip);
- } else
- switch (flags) {
- case DB_SET_LOCK_TIMEOUT:
- dbenv->lk_timeout = timeout;
- break;
- case DB_SET_TXN_TIMEOUT:
- dbenv->tx_timeout = timeout;
- break;
- default:
- ret = 1;
- break;
- }
-
- if (ret)
- ret = __db_ferr(env, "DB_ENV->set_timeout", 0);
-
- return (ret);
-}
diff --git a/lock/lock_region.c b/lock/lock_region.c
deleted file mode 100644
index f26960a..0000000
--- a/lock/lock_region.c
+++ /dev/null
@@ -1,479 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-
-static int __lock_region_init __P((ENV *, DB_LOCKTAB *));
-static size_t
- __lock_region_size __P((ENV *));
-
-/*
- * The conflict arrays are set up such that the row is the lock you are
- * holding and the column is the lock that is desired.
- */
-#define DB_LOCK_RIW_N 9
-static const u_int8_t db_riw_conflicts[] = {
-/* N R W WT IW IR RIW DR WW */
-/* N */ 0, 0, 0, 0, 0, 0, 0, 0, 0,
-/* R */ 0, 0, 1, 0, 1, 0, 1, 0, 1,
-/* W */ 0, 1, 1, 1, 1, 1, 1, 1, 1,
-/* WT */ 0, 0, 0, 0, 0, 0, 0, 0, 0,
-/* IW */ 0, 1, 1, 0, 0, 0, 0, 1, 1,
-/* IR */ 0, 0, 1, 0, 0, 0, 0, 0, 1,
-/* RIW */ 0, 1, 1, 0, 0, 0, 0, 1, 1,
-/* DR */ 0, 0, 1, 0, 1, 0, 1, 0, 0,
-/* WW */ 0, 1, 1, 0, 1, 1, 1, 0, 1
-};
-
-/*
- * This conflict array is used for concurrent db access (CDB). It uses
- * the same locks as the db_riw_conflicts array, but adds an IW mode to
- * be used for write cursors.
- */
-#define DB_LOCK_CDB_N 5
-static const u_int8_t db_cdb_conflicts[] = {
- /* N R W WT IW */
- /* N */ 0, 0, 0, 0, 0,
- /* R */ 0, 0, 1, 0, 0,
- /* W */ 0, 1, 1, 1, 1,
- /* WT */ 0, 0, 0, 0, 0,
- /* IW */ 0, 0, 1, 0, 1
-};
-
-/*
- * __lock_open --
- * Internal version of lock_open: only called from ENV->open.
- *
- * PUBLIC: int __lock_open __P((ENV *, int));
- */
-int
-__lock_open(env, create_ok)
- ENV *env;
- int create_ok;
-{
- DB_ENV *dbenv;
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- size_t size;
- int region_locked, ret;
-
- dbenv = env->dbenv;
- region_locked = 0;
-
- /* Create the lock table structure. */
- if ((ret = __os_calloc(env, 1, sizeof(DB_LOCKTAB), &lt)) != 0)
- return (ret);
- lt->env = env;
-
- /* Join/create the lock region. */
- lt->reginfo.env = env;
- lt->reginfo.type = REGION_TYPE_LOCK;
- lt->reginfo.id = INVALID_REGION_ID;
- lt->reginfo.flags = REGION_JOIN_OK;
- if (create_ok)
- F_SET(&lt->reginfo, REGION_CREATE_OK);
-
- /* Make sure there is at least one object and lock per partition. */
- if (dbenv->lk_max_objects < dbenv->lk_partitions)
- dbenv->lk_max_objects = dbenv->lk_partitions;
- if (dbenv->lk_max < dbenv->lk_partitions)
- dbenv->lk_max = dbenv->lk_partitions;
- size = __lock_region_size(env);
- if ((ret = __env_region_attach(env, &lt->reginfo, size)) != 0)
- goto err;
-
- /* If we created the region, initialize it. */
- if (F_ISSET(&lt->reginfo, REGION_CREATE))
- if ((ret = __lock_region_init(env, lt)) != 0)
- goto err;
-
- /* Set the local addresses. */
- region = lt->reginfo.primary =
- R_ADDR(&lt->reginfo, lt->reginfo.rp->primary);
-
- /* Set remaining pointers into region. */
- lt->conflicts = R_ADDR(&lt->reginfo, region->conf_off);
- lt->obj_tab = R_ADDR(&lt->reginfo, region->obj_off);
-#ifdef HAVE_STATISTICS
- lt->obj_stat = R_ADDR(&lt->reginfo, region->stat_off);
-#endif
- lt->part_array = R_ADDR(&lt->reginfo, region->part_off);
- lt->locker_tab = R_ADDR(&lt->reginfo, region->locker_off);
-
- env->lk_handle = lt;
-
- LOCK_REGION_LOCK(env);
- region_locked = 1;
-
- if (dbenv->lk_detect != DB_LOCK_NORUN) {
- /*
- * Check for incompatible automatic deadlock detection requests.
-		 * There are scenarios where changing the detector configuration
-		 * is reasonable, but we disallow them, guessing that such a
-		 * change is likely to be an application error.
- *
- * We allow applications to turn on the lock detector, and we
- * ignore attempts to set it to the default or current value.
- */
- if (region->detect != DB_LOCK_NORUN &&
- dbenv->lk_detect != DB_LOCK_DEFAULT &&
- region->detect != dbenv->lk_detect) {
- __db_errx(env,
- "lock_open: incompatible deadlock detector mode");
- ret = EINVAL;
- goto err;
- }
- if (region->detect == DB_LOCK_NORUN)
- region->detect = dbenv->lk_detect;
- }
-
- /*
- * A process joining the region may have reset the lock and transaction
- * timeouts.
- */
- if (dbenv->lk_timeout != 0)
- region->lk_timeout = dbenv->lk_timeout;
- if (dbenv->tx_timeout != 0)
- region->tx_timeout = dbenv->tx_timeout;
-
- LOCK_REGION_UNLOCK(env);
- region_locked = 0;
-
- return (0);
-
-err: if (lt->reginfo.addr != NULL) {
- if (region_locked)
- LOCK_REGION_UNLOCK(env);
- (void)__env_region_detach(env, &lt->reginfo, 0);
- }
- env->lk_handle = NULL;
-
- __os_free(env, lt);
- return (ret);
-}
-
-/*
- * __lock_region_init --
- * Initialize the lock region.
- */
-static int
-__lock_region_init(env, lt)
- ENV *env;
- DB_LOCKTAB *lt;
-{
- const u_int8_t *lk_conflicts;
- struct __db_lock *lp;
- DB_ENV *dbenv;
- DB_LOCKER *lidp;
- DB_LOCKOBJ *op;
- DB_LOCKREGION *region;
- DB_LOCKPART *part;
- u_int32_t extra_locks, extra_objects, i, j, max;
- u_int8_t *addr;
- int lk_modes, ret;
-
- dbenv = env->dbenv;
-
- if ((ret = __env_alloc(&lt->reginfo,
- sizeof(DB_LOCKREGION), &lt->reginfo.primary)) != 0)
- goto mem_err;
- lt->reginfo.rp->primary = R_OFFSET(&lt->reginfo, lt->reginfo.primary);
- region = lt->reginfo.primary;
- memset(region, 0, sizeof(*region));
-
- if ((ret = __mutex_alloc(
- env, MTX_LOCK_REGION, 0, &region->mtx_region)) != 0)
- return (ret);
-
- /* Select a conflict matrix if none specified. */
- if (dbenv->lk_modes == 0)
- if (CDB_LOCKING(env)) {
- lk_modes = DB_LOCK_CDB_N;
- lk_conflicts = db_cdb_conflicts;
- } else {
- lk_modes = DB_LOCK_RIW_N;
- lk_conflicts = db_riw_conflicts;
- }
- else {
- lk_modes = dbenv->lk_modes;
- lk_conflicts = dbenv->lk_conflicts;
- }
-
- region->need_dd = 0;
- timespecclear(&region->next_timeout);
- region->detect = DB_LOCK_NORUN;
- region->lk_timeout = dbenv->lk_timeout;
- region->tx_timeout = dbenv->tx_timeout;
- region->locker_t_size = __db_tablesize(dbenv->lk_max_lockers);
- region->object_t_size = __db_tablesize(dbenv->lk_max_objects);
- region->part_t_size = dbenv->lk_partitions;
- region->lock_id = 0;
- region->cur_maxid = DB_LOCK_MAXID;
- region->nmodes = lk_modes;
- memset(&region->stat, 0, sizeof(region->stat));
- region->stat.st_maxlocks = dbenv->lk_max;
- region->stat.st_maxlockers = dbenv->lk_max_lockers;
- region->stat.st_maxobjects = dbenv->lk_max_objects;
- region->stat.st_partitions = dbenv->lk_partitions;
-
- /* Allocate room for the conflict matrix and initialize it. */
- if ((ret = __env_alloc(
- &lt->reginfo, (size_t)(lk_modes * lk_modes), &addr)) != 0)
- goto mem_err;
- memcpy(addr, lk_conflicts, (size_t)(lk_modes * lk_modes));
- region->conf_off = R_OFFSET(&lt->reginfo, addr);
-
- /* Allocate room for the object hash table and initialize it. */
- if ((ret = __env_alloc(&lt->reginfo,
- region->object_t_size * sizeof(DB_HASHTAB), &addr)) != 0)
- goto mem_err;
- __db_hashinit(addr, region->object_t_size);
- region->obj_off = R_OFFSET(&lt->reginfo, addr);
-
- /* Allocate room for the object hash stats table and initialize it. */
- if ((ret = __env_alloc(&lt->reginfo,
- region->object_t_size * sizeof(DB_LOCK_HSTAT), &addr)) != 0)
- goto mem_err;
- memset(addr, 0, region->object_t_size * sizeof(DB_LOCK_HSTAT));
- region->stat_off = R_OFFSET(&lt->reginfo, addr);
-
- /* Allocate room for the partition table and initialize its mutexes. */
- if ((ret = __env_alloc(&lt->reginfo,
- region->part_t_size * sizeof(DB_LOCKPART), &part)) != 0)
- goto mem_err;
- memset(part, 0, region->part_t_size * sizeof(DB_LOCKPART));
- region->part_off = R_OFFSET(&lt->reginfo, part);
- for (i = 0; i < region->part_t_size; i++) {
- if ((ret = __mutex_alloc(
- env, MTX_LOCK_REGION, 0, &part[i].mtx_part)) != 0)
- return (ret);
- }
- if ((ret = __mutex_alloc(
- env, MTX_LOCK_REGION, 0, &region->mtx_dd)) != 0)
- return (ret);
-
- if ((ret = __mutex_alloc(
- env, MTX_LOCK_REGION, 0, &region->mtx_lockers)) != 0)
- return (ret);
-
- /* Allocate room for the locker hash table and initialize it. */
- if ((ret = __env_alloc(&lt->reginfo,
- region->locker_t_size * sizeof(DB_HASHTAB), &addr)) != 0)
- goto mem_err;
- __db_hashinit(addr, region->locker_t_size);
- region->locker_off = R_OFFSET(&lt->reginfo, addr);
-
- SH_TAILQ_INIT(&region->dd_objs);
-
- /*
- * If the locks and objects don't divide evenly, spread them around.
- */
- extra_locks = region->stat.st_maxlocks -
- ((region->stat.st_maxlocks / region->part_t_size) *
- region->part_t_size);
- extra_objects = region->stat.st_maxobjects -
- ((region->stat.st_maxobjects / region->part_t_size) *
- region->part_t_size);
- for (j = 0; j < region->part_t_size; j++) {
- /* Initialize locks onto a free list. */
- SH_TAILQ_INIT(&part[j].free_locks);
- max = region->stat.st_maxlocks / region->part_t_size;
- if (extra_locks > 0) {
- max++;
- extra_locks--;
- }
- for (i = 0; i < max; ++i) {
- if ((ret = __env_alloc(&lt->reginfo,
- sizeof(struct __db_lock), &lp)) != 0)
- goto mem_err;
- lp->mtx_lock = MUTEX_INVALID;
- lp->gen = 0;
- lp->status = DB_LSTAT_FREE;
- SH_TAILQ_INSERT_HEAD(
- &part[j].free_locks, lp, links, __db_lock);
- }
- /* Initialize objects onto a free list. */
- max = region->stat.st_maxobjects / region->part_t_size;
- if (extra_objects > 0) {
- max++;
- extra_objects--;
- }
- SH_TAILQ_INIT(&part[j].free_objs);
- for (i = 0; i < max; ++i) {
- if ((ret = __env_alloc(&lt->reginfo,
- sizeof(DB_LOCKOBJ), &op)) != 0)
- goto mem_err;
- SH_TAILQ_INSERT_HEAD(
- &part[j].free_objs, op, links, __db_lockobj);
- op->generation = 0;
- }
- }
-
- /* Initialize lockers onto a free list. */
- SH_TAILQ_INIT(&region->lockers);
- SH_TAILQ_INIT(&region->free_lockers);
- for (i = 0; i < region->stat.st_maxlockers; ++i) {
- if ((ret =
- __env_alloc(&lt->reginfo, sizeof(DB_LOCKER), &lidp)) != 0) {
-mem_err: __db_errx(env,
- "unable to allocate memory for the lock table");
- return (ret);
- }
- SH_TAILQ_INSERT_HEAD(
- &region->free_lockers, lidp, links, __db_locker);
- }
-
- lt->reginfo.mtx_alloc = region->mtx_region;
- return (0);
-}
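The extra_locks/extra_objects loop above spreads the configured maxima over the partitions so the per-partition counts always sum to the totals. A minimal sketch of that arithmetic, using illustrative names that are not part of the DB source:

```c
#include <stdint.h>

/*
 * Sketch of how __lock_region_init distributes lk_max locks across
 * nparts partitions: every partition gets the integer quotient, and
 * the first (lk_max % nparts) partitions absorb one extra each, so
 * the per-partition counts always sum to lk_max.
 */
static uint32_t
part_alloc_count(uint32_t lk_max, uint32_t nparts, uint32_t part)
{
	uint32_t base, extra;

	base = lk_max / nparts;
	extra = lk_max - base * nparts;	/* i.e., lk_max % nparts */
	return (part < extra ? base + 1 : base);
}
```

The same helper describes the object split, since the loop applies identical logic to st_maxobjects.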
-
-/*
- * __lock_env_refresh --
- * Clean up after the lock system on a close or failed open.
- *
- * PUBLIC: int __lock_env_refresh __P((ENV *));
- */
-int
-__lock_env_refresh(env)
- ENV *env;
-{
- struct __db_lock *lp;
- DB_LOCKER *locker;
- DB_LOCKOBJ *lockobj;
- DB_LOCKREGION *lr;
- DB_LOCKTAB *lt;
- REGINFO *reginfo;
- u_int32_t j;
- int ret;
-
- lt = env->lk_handle;
- reginfo = &lt->reginfo;
- lr = reginfo->primary;
-
-	/*
-	 * If this is a private region, return the memory to the heap.  This
-	 * is not needed for filesystem-backed or system shared memory
-	 * regions, since that memory isn't owned by any particular process.
-	 */
- if (F_ISSET(env, ENV_PRIVATE)) {
- reginfo->mtx_alloc = MUTEX_INVALID;
- /* Discard the conflict matrix. */
- __env_alloc_free(reginfo, R_ADDR(reginfo, lr->conf_off));
-
- /* Discard the object hash table. */
- __env_alloc_free(reginfo, R_ADDR(reginfo, lr->obj_off));
-
- /* Discard the locker hash table. */
- __env_alloc_free(reginfo, R_ADDR(reginfo, lr->locker_off));
-
- /* Discard the object hash stat table. */
- __env_alloc_free(reginfo, R_ADDR(reginfo, lr->stat_off));
-
- for (j = 0; j < lr->part_t_size; j++) {
- /* Discard locks. */
- while ((lp = SH_TAILQ_FIRST(
- &FREE_LOCKS(lt, j), __db_lock)) != NULL) {
- SH_TAILQ_REMOVE(&FREE_LOCKS(lt, j),
- lp, links, __db_lock);
- __env_alloc_free(reginfo, lp);
- }
-
- /* Discard objects. */
- while ((lockobj = SH_TAILQ_FIRST(
- &FREE_OBJS(lt, j), __db_lockobj)) != NULL) {
- SH_TAILQ_REMOVE(&FREE_OBJS(lt, j),
- lockobj, links, __db_lockobj);
- __env_alloc_free(reginfo, lockobj);
- }
- }
-
- /* Discard the object partition array. */
- __env_alloc_free(reginfo, R_ADDR(reginfo, lr->part_off));
-
- /* Discard lockers. */
- while ((locker =
- SH_TAILQ_FIRST(&lr->free_lockers, __db_locker)) != NULL) {
- SH_TAILQ_REMOVE(
- &lr->free_lockers, locker, links, __db_locker);
- __env_alloc_free(reginfo, locker);
- }
- }
-
- /* Detach from the region. */
- ret = __env_region_detach(env, reginfo, 0);
-
- /* Discard DB_LOCKTAB. */
- __os_free(env, lt);
- env->lk_handle = NULL;
-
- return (ret);
-}
-
-/*
- * __lock_region_mutex_count --
- * Return the number of mutexes the lock region will need.
- *
- * PUBLIC: u_int32_t __lock_region_mutex_count __P((ENV *));
- */
-u_int32_t
-__lock_region_mutex_count(env)
- ENV *env;
-{
- DB_ENV *dbenv;
-
- dbenv = env->dbenv;
-
- return (dbenv->lk_max + dbenv->lk_partitions + 3);
-}
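The formula above counts one mutex per lock (each lock's mtx_lock), one per partition (mtx_part), plus the three region-wide mutexes allocated in __lock_region_init (mtx_region, mtx_dd and mtx_lockers). A trivial restatement, assuming those are the only mutex consumers:

```c
#include <stdint.h>

/*
 * Restatement of __lock_region_mutex_count: one mutex per lock,
 * one per partition, plus the region, deadlock-detector and
 * locker-table mutexes -- the "+ 3".
 */
static uint32_t
lock_mutex_count(uint32_t lk_max, uint32_t lk_partitions)
{
	return (lk_max + lk_partitions + 3);
}
```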
-
-/*
- * __lock_region_size --
- * Return the region size.
- */
-static size_t
-__lock_region_size(env)
- ENV *env;
-{
- DB_ENV *dbenv;
- size_t retval;
-
- dbenv = env->dbenv;
-
- /*
- * Figure out how much space we're going to need. This list should
- * map one-to-one with the __env_alloc calls in __lock_region_init.
- */
- retval = 0;
- retval += __env_alloc_size(sizeof(DB_LOCKREGION));
- retval += __env_alloc_size((size_t)(dbenv->lk_modes * dbenv->lk_modes));
- retval += __env_alloc_size(
- __db_tablesize(dbenv->lk_max_objects) * (sizeof(DB_HASHTAB)));
- retval += __env_alloc_size(
- __db_tablesize(dbenv->lk_max_lockers) * (sizeof(DB_HASHTAB)));
- retval += __env_alloc_size(
- __db_tablesize(dbenv->lk_max_objects) * (sizeof(DB_LOCK_HSTAT)));
- retval +=
- __env_alloc_size(dbenv->lk_partitions * (sizeof(DB_LOCKPART)));
- retval += __env_alloc_size(sizeof(struct __db_lock)) * dbenv->lk_max;
- retval += __env_alloc_size(sizeof(DB_LOCKOBJ)) * dbenv->lk_max_objects;
- retval += __env_alloc_size(sizeof(DB_LOCKER)) * dbenv->lk_max_lockers;
-
- /*
- * Include 16 bytes of string space per lock. DB doesn't use it
- * because we pre-allocate lock space for DBTs in the structure.
- */
- retval += __env_alloc_size(dbenv->lk_max * 16);
-
-	/* And we keep getting this wrong, so let's be generous. */
- retval += retval / 4;
-
- return (retval);
-}
diff --git a/lock/lock_stat.c b/lock/lock_stat.c
deleted file mode 100644
index ef55e87..0000000
--- a/lock/lock_stat.c
+++ /dev/null
@@ -1,751 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/db_page.h"
-#include "dbinc/lock.h"
-#include "dbinc/log.h"
-#include "dbinc/db_am.h"
-
-#ifdef HAVE_STATISTICS
-static int __lock_dump_locker
- __P((ENV *, DB_MSGBUF *, DB_LOCKTAB *, DB_LOCKER *));
-static int __lock_dump_object __P((DB_LOCKTAB *, DB_MSGBUF *, DB_LOCKOBJ *));
-static int __lock_print_all __P((ENV *, u_int32_t));
-static int __lock_print_stats __P((ENV *, u_int32_t));
-static void __lock_print_header __P((ENV *));
-static int __lock_stat __P((ENV *, DB_LOCK_STAT **, u_int32_t));
-
-/*
- * __lock_stat_pp --
- * ENV->lock_stat pre/post processing.
- *
- * PUBLIC: int __lock_stat_pp __P((DB_ENV *, DB_LOCK_STAT **, u_int32_t));
- */
-int
-__lock_stat_pp(dbenv, statp, flags)
- DB_ENV *dbenv;
- DB_LOCK_STAT **statp;
- u_int32_t flags;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_stat", DB_INIT_LOCK);
-
- if ((ret = __db_fchk(env,
- "DB_ENV->lock_stat", flags, DB_STAT_CLEAR)) != 0)
- return (ret);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env, (__lock_stat(env, statp, flags)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_stat --
- * ENV->lock_stat.
- */
-static int
-__lock_stat(env, statp, flags)
- ENV *env;
- DB_LOCK_STAT **statp;
- u_int32_t flags;
-{
- DB_LOCKREGION *region;
- DB_LOCKTAB *lt;
- DB_LOCK_STAT *stats, tmp;
- DB_LOCK_HSTAT htmp;
- DB_LOCK_PSTAT ptmp;
- int ret;
- u_int32_t i;
- uintmax_t tmp_wait, tmp_nowait;
-
- *statp = NULL;
- lt = env->lk_handle;
-
- if ((ret = __os_umalloc(env, sizeof(*stats), &stats)) != 0)
- return (ret);
-
- /* Copy out the global statistics. */
- LOCK_REGION_LOCK(env);
-
- region = lt->reginfo.primary;
- memcpy(stats, &region->stat, sizeof(*stats));
- stats->st_locktimeout = region->lk_timeout;
- stats->st_txntimeout = region->tx_timeout;
- stats->st_id = region->lock_id;
- stats->st_cur_maxid = region->cur_maxid;
- stats->st_nlockers = region->nlockers;
- stats->st_nmodes = region->nmodes;
-
- for (i = 0; i < region->object_t_size; i++) {
- stats->st_nrequests += lt->obj_stat[i].st_nrequests;
- stats->st_nreleases += lt->obj_stat[i].st_nreleases;
- stats->st_nupgrade += lt->obj_stat[i].st_nupgrade;
- stats->st_ndowngrade += lt->obj_stat[i].st_ndowngrade;
- stats->st_lock_wait += lt->obj_stat[i].st_lock_wait;
- stats->st_lock_nowait += lt->obj_stat[i].st_lock_nowait;
- stats->st_nlocktimeouts += lt->obj_stat[i].st_nlocktimeouts;
- stats->st_ntxntimeouts += lt->obj_stat[i].st_ntxntimeouts;
- if (stats->st_maxhlocks < lt->obj_stat[i].st_maxnlocks)
- stats->st_maxhlocks = lt->obj_stat[i].st_maxnlocks;
- if (stats->st_maxhobjects < lt->obj_stat[i].st_maxnobjects)
- stats->st_maxhobjects = lt->obj_stat[i].st_maxnobjects;
- if (stats->st_hash_len < lt->obj_stat[i].st_hash_len)
- stats->st_hash_len = lt->obj_stat[i].st_hash_len;
- if (LF_ISSET(DB_STAT_CLEAR)) {
- htmp = lt->obj_stat[i];
- memset(&lt->obj_stat[i], 0, sizeof(lt->obj_stat[i]));
- lt->obj_stat[i].st_nlocks = htmp.st_nlocks;
- lt->obj_stat[i].st_maxnlocks = htmp.st_nlocks;
- lt->obj_stat[i].st_nobjects = htmp.st_nobjects;
-			lt->obj_stat[i].st_maxnobjects = htmp.st_nobjects;
-		}
- }
-
- for (i = 0; i < region->part_t_size; i++) {
- stats->st_nlocks += lt->part_array[i].part_stat.st_nlocks;
- stats->st_maxnlocks +=
- lt->part_array[i].part_stat.st_maxnlocks;
- stats->st_nobjects += lt->part_array[i].part_stat.st_nobjects;
- stats->st_maxnobjects +=
- lt->part_array[i].part_stat.st_maxnobjects;
- stats->st_locksteals +=
- lt->part_array[i].part_stat.st_locksteals;
- if (stats->st_maxlsteals <
- lt->part_array[i].part_stat.st_locksteals)
- stats->st_maxlsteals =
- lt->part_array[i].part_stat.st_locksteals;
- stats->st_objectsteals +=
- lt->part_array[i].part_stat.st_objectsteals;
- if (stats->st_maxosteals <
- lt->part_array[i].part_stat.st_objectsteals)
- stats->st_maxosteals =
- lt->part_array[i].part_stat.st_objectsteals;
- __mutex_set_wait_info(env,
- lt->part_array[i].mtx_part, &tmp_wait, &tmp_nowait);
- stats->st_part_nowait += tmp_nowait;
- stats->st_part_wait += tmp_wait;
- if (tmp_wait > stats->st_part_max_wait) {
- stats->st_part_max_nowait = tmp_nowait;
- stats->st_part_max_wait = tmp_wait;
- }
-
- if (LF_ISSET(DB_STAT_CLEAR)) {
- ptmp = lt->part_array[i].part_stat;
- memset(&lt->part_array[i].part_stat,
- 0, sizeof(lt->part_array[i].part_stat));
- lt->part_array[i].part_stat.st_nlocks =
- ptmp.st_nlocks;
- lt->part_array[i].part_stat.st_maxnlocks =
- ptmp.st_nlocks;
- lt->part_array[i].part_stat.st_nobjects =
- ptmp.st_nobjects;
- lt->part_array[i].part_stat.st_maxnobjects =
- ptmp.st_nobjects;
- }
- }
-
- __mutex_set_wait_info(env, region->mtx_region,
- &stats->st_region_wait, &stats->st_region_nowait);
- __mutex_set_wait_info(env, region->mtx_dd,
- &stats->st_objs_wait, &stats->st_objs_nowait);
- __mutex_set_wait_info(env, region->mtx_lockers,
- &stats->st_lockers_wait, &stats->st_lockers_nowait);
- stats->st_regsize = lt->reginfo.rp->size;
- if (LF_ISSET(DB_STAT_CLEAR)) {
- tmp = region->stat;
- memset(&region->stat, 0, sizeof(region->stat));
- if (!LF_ISSET(DB_STAT_SUBSYSTEM)) {
- __mutex_clear(env, region->mtx_region);
- __mutex_clear(env, region->mtx_dd);
- __mutex_clear(env, region->mtx_lockers);
- for (i = 0; i < region->part_t_size; i++)
- __mutex_clear(env, lt->part_array[i].mtx_part);
- }
-
- region->stat.st_maxlocks = tmp.st_maxlocks;
- region->stat.st_maxlockers = tmp.st_maxlockers;
- region->stat.st_maxobjects = tmp.st_maxobjects;
- region->stat.st_nlocks =
- region->stat.st_maxnlocks = tmp.st_nlocks;
- region->stat.st_maxnlockers = region->nlockers;
- region->stat.st_nobjects =
- region->stat.st_maxnobjects = tmp.st_nobjects;
- region->stat.st_partitions = tmp.st_partitions;
- }
-
- LOCK_REGION_UNLOCK(env);
-
- *statp = stats;
- return (0);
-}
-
-/*
- * __lock_stat_print_pp --
- * ENV->lock_stat_print pre/post processing.
- *
- * PUBLIC: int __lock_stat_print_pp __P((DB_ENV *, u_int32_t));
- */
-int
-__lock_stat_print_pp(dbenv, flags)
- DB_ENV *dbenv;
- u_int32_t flags;
-{
- DB_THREAD_INFO *ip;
- ENV *env;
- int ret;
-
- env = dbenv->env;
-
- ENV_REQUIRES_CONFIG(env,
- env->lk_handle, "DB_ENV->lock_stat_print", DB_INIT_LOCK);
-
-#define DB_STAT_LOCK_FLAGS \
- (DB_STAT_ALL | DB_STAT_CLEAR | DB_STAT_LOCK_CONF | \
- DB_STAT_LOCK_LOCKERS | DB_STAT_LOCK_OBJECTS | DB_STAT_LOCK_PARAMS)
- if ((ret = __db_fchk(env, "DB_ENV->lock_stat_print",
- flags, DB_STAT_CLEAR | DB_STAT_LOCK_FLAGS)) != 0)
- return (ret);
-
- ENV_ENTER(env, ip);
- REPLICATION_WRAP(env, (__lock_stat_print(env, flags)), 0, ret);
- ENV_LEAVE(env, ip);
- return (ret);
-}
-
-/*
- * __lock_stat_print --
- * ENV->lock_stat_print method.
- *
- * PUBLIC: int __lock_stat_print __P((ENV *, u_int32_t));
- */
-int
-__lock_stat_print(env, flags)
- ENV *env;
- u_int32_t flags;
-{
- u_int32_t orig_flags;
- int ret;
-
- orig_flags = flags;
- LF_CLR(DB_STAT_CLEAR | DB_STAT_SUBSYSTEM);
- if (flags == 0 || LF_ISSET(DB_STAT_ALL)) {
- ret = __lock_print_stats(env, orig_flags);
- if (flags == 0 || ret != 0)
- return (ret);
- }
-
- if (LF_ISSET(DB_STAT_ALL | DB_STAT_LOCK_CONF | DB_STAT_LOCK_LOCKERS |
- DB_STAT_LOCK_OBJECTS | DB_STAT_LOCK_PARAMS) &&
- (ret = __lock_print_all(env, orig_flags)) != 0)
- return (ret);
-
- return (0);
-}
-
-/*
- * __lock_print_stats --
- * Display default lock region statistics.
- */
-static int
-__lock_print_stats(env, flags)
- ENV *env;
- u_int32_t flags;
-{
- DB_LOCK_STAT *sp;
- int ret;
-
-#ifdef LOCK_DIAGNOSTIC
- DB_LOCKTAB *lt;
- DB_LOCKREGION *region;
- u_int32_t i;
- u_int32_t wait, nowait;
-
- lt = env->lk_handle;
- region = lt->reginfo.primary;
-
- for (i = 0; i < region->object_t_size; i++) {
- if (lt->obj_stat[i].st_hash_len == 0)
- continue;
- __db_dl(env,
- "Hash bucket", (u_long)i);
- __db_dl(env, "Partition", (u_long)LOCK_PART(region, i));
- __mutex_set_wait_info(env,
- lt->part_array[LOCK_PART(region, i)].mtx_part,
- &wait, &nowait);
- __db_dl_pct(env,
- "The number of partition mutex requests that required waiting",
- (u_long)wait, DB_PCT(wait, wait + nowait), NULL);
- __db_dl(env,
- "Maximum hash bucket length",
- (u_long)lt->obj_stat[i].st_hash_len);
- __db_dl(env,
- "Total number of locks requested",
- (u_long)lt->obj_stat[i].st_nrequests);
- __db_dl(env,
- "Total number of locks released",
- (u_long)lt->obj_stat[i].st_nreleases);
- __db_dl(env,
- "Total number of locks upgraded",
- (u_long)lt->obj_stat[i].st_nupgrade);
- __db_dl(env,
- "Total number of locks downgraded",
- (u_long)lt->obj_stat[i].st_ndowngrade);
- __db_dl(env,
- "Lock requests not available due to conflicts, for which we waited",
- (u_long)lt->obj_stat[i].st_lock_wait);
- __db_dl(env,
- "Lock requests not available due to conflicts, for which we did not wait",
- (u_long)lt->obj_stat[i].st_lock_nowait);
- __db_dl(env, "Number of locks that have timed out",
- (u_long)lt->obj_stat[i].st_nlocktimeouts);
- __db_dl(env, "Number of transactions that have timed out",
- (u_long)lt->obj_stat[i].st_ntxntimeouts);
- }
-#endif
- if ((ret = __lock_stat(env, &sp, flags)) != 0)
- return (ret);
-
- if (LF_ISSET(DB_STAT_ALL))
- __db_msg(env, "Default locking region information:");
- __db_dl(env, "Last allocated locker ID", (u_long)sp->st_id);
- __db_msg(env, "%#lx\tCurrent maximum unused locker ID",
- (u_long)sp->st_cur_maxid);
- __db_dl(env, "Number of lock modes", (u_long)sp->st_nmodes);
- __db_dl(env,
- "Maximum number of locks possible", (u_long)sp->st_maxlocks);
- __db_dl(env,
- "Maximum number of lockers possible", (u_long)sp->st_maxlockers);
- __db_dl(env, "Maximum number of lock objects possible",
- (u_long)sp->st_maxobjects);
- __db_dl(env, "Number of lock object partitions",
- (u_long)sp->st_partitions);
- __db_dl(env, "Number of current locks", (u_long)sp->st_nlocks);
- __db_dl(env, "Maximum number of locks at any one time",
- (u_long)sp->st_maxnlocks);
- __db_dl(env, "Maximum number of locks in any one bucket",
- (u_long)sp->st_maxhlocks);
-	__db_dl(env, "Number of locks stolen for an empty partition",
-	    (u_long)sp->st_locksteals);
- __db_dl(env, "Maximum number of locks stolen for any one partition",
- (u_long)sp->st_maxlsteals);
- __db_dl(env, "Number of current lockers", (u_long)sp->st_nlockers);
- __db_dl(env, "Maximum number of lockers at any one time",
- (u_long)sp->st_maxnlockers);
- __db_dl(env,
- "Number of current lock objects", (u_long)sp->st_nobjects);
- __db_dl(env, "Maximum number of lock objects at any one time",
- (u_long)sp->st_maxnobjects);
- __db_dl(env, "Maximum number of lock objects in any one bucket",
- (u_long)sp->st_maxhobjects);
-	__db_dl(env,
-	    "Number of objects stolen for an empty partition",
-	    (u_long)sp->st_objectsteals);
- __db_dl(env, "Maximum number of objects stolen for any one partition",
- (u_long)sp->st_maxosteals);
- __db_dl(env,
- "Total number of locks requested", (u_long)sp->st_nrequests);
- __db_dl(env,
- "Total number of locks released", (u_long)sp->st_nreleases);
- __db_dl(env,
- "Total number of locks upgraded", (u_long)sp->st_nupgrade);
- __db_dl(env,
- "Total number of locks downgraded", (u_long)sp->st_ndowngrade);
- __db_dl(env,
- "Lock requests not available due to conflicts, for which we waited",
- (u_long)sp->st_lock_wait);
- __db_dl(env,
- "Lock requests not available due to conflicts, for which we did not wait",
- (u_long)sp->st_lock_nowait);
- __db_dl(env, "Number of deadlocks", (u_long)sp->st_ndeadlocks);
- __db_dl(env, "Lock timeout value", (u_long)sp->st_locktimeout);
- __db_dl(env, "Number of locks that have timed out",
- (u_long)sp->st_nlocktimeouts);
- __db_dl(env,
- "Transaction timeout value", (u_long)sp->st_txntimeout);
- __db_dl(env, "Number of transactions that have timed out",
- (u_long)sp->st_ntxntimeouts);
-
- __db_dlbytes(env, "The size of the lock region",
- (u_long)0, (u_long)0, (u_long)sp->st_regsize);
- __db_dl_pct(env,
- "The number of partition locks that required waiting",
- (u_long)sp->st_part_wait, DB_PCT(
- sp->st_part_wait, sp->st_part_wait + sp->st_part_nowait), NULL);
- __db_dl_pct(env,
- "The maximum number of times any partition lock was waited for",
- (u_long)sp->st_part_max_wait, DB_PCT(sp->st_part_max_wait,
- sp->st_part_max_wait + sp->st_part_max_nowait), NULL);
- __db_dl_pct(env,
- "The number of object queue operations that required waiting",
- (u_long)sp->st_objs_wait, DB_PCT(sp->st_objs_wait,
- sp->st_objs_wait + sp->st_objs_nowait), NULL);
- __db_dl_pct(env,
- "The number of locker allocations that required waiting",
- (u_long)sp->st_lockers_wait, DB_PCT(sp->st_lockers_wait,
- sp->st_lockers_wait + sp->st_lockers_nowait), NULL);
- __db_dl_pct(env,
- "The number of region locks that required waiting",
- (u_long)sp->st_region_wait, DB_PCT(sp->st_region_wait,
- sp->st_region_wait + sp->st_region_nowait), NULL);
- __db_dl(env, "Maximum hash bucket length",
- (u_long)sp->st_hash_len);
-
- __os_ufree(env, sp);
-
- return (0);
-}
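The DB_PCT values printed above express waits as a share of total mutex acquisitions. A hypothetical stand-in for that calculation (the real DB_PCT macro may differ), guarding against a zero total:

```c
/*
 * Share of mutex acquisitions that had to wait, as an integer
 * percentage.  A zero total yields zero rather than dividing by zero.
 * (Illustrative helper, not the DB_PCT macro itself.)
 */
static int
pct_waited(unsigned long wait, unsigned long nowait)
{
	unsigned long total = wait + nowait;

	return (total == 0 ? 0 : (int)((100.0 * wait) / total));
}
```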
-
-/*
- * __lock_print_all --
- * Display debugging lock region statistics.
- */
-static int
-__lock_print_all(env, flags)
- ENV *env;
- u_int32_t flags;
-{
- DB_LOCKER *lip;
- DB_LOCKOBJ *op;
- DB_LOCKREGION *lrp;
- DB_LOCKTAB *lt;
- DB_MSGBUF mb;
- int i, j;
- u_int32_t k;
-
- lt = env->lk_handle;
- lrp = lt->reginfo.primary;
- DB_MSGBUF_INIT(&mb);
-
- LOCK_REGION_LOCK(env);
- __db_print_reginfo(env, &lt->reginfo, "Lock", flags);
-
- if (LF_ISSET(DB_STAT_ALL | DB_STAT_LOCK_PARAMS)) {
- __db_msg(env, "%s", DB_GLOBAL(db_line));
- __db_msg(env, "Lock region parameters:");
- __mutex_print_debug_single(env,
- "Lock region region mutex", lrp->mtx_region, flags);
- STAT_ULONG("locker table size", lrp->locker_t_size);
- STAT_ULONG("object table size", lrp->object_t_size);
- STAT_ULONG("obj_off", lrp->obj_off);
- STAT_ULONG("locker_off", lrp->locker_off);
- STAT_ULONG("need_dd", lrp->need_dd);
- if (timespecisset(&lrp->next_timeout)) {
-#ifdef HAVE_STRFTIME
- time_t t = (time_t)lrp->next_timeout.tv_sec;
- char tbuf[64];
- if (strftime(tbuf, sizeof(tbuf),
- "%m-%d-%H:%M:%S", localtime(&t)) != 0)
- __db_msg(env, "next_timeout: %s.%09lu",
- tbuf, (u_long)lrp->next_timeout.tv_nsec);
- else
-#endif
- __db_msg(env, "next_timeout: %lu.%09lu",
- (u_long)lrp->next_timeout.tv_sec,
- (u_long)lrp->next_timeout.tv_nsec);
- }
- }
-
- if (LF_ISSET(DB_STAT_ALL | DB_STAT_LOCK_CONF)) {
- __db_msg(env, "%s", DB_GLOBAL(db_line));
- __db_msg(env, "Lock conflict matrix:");
- for (i = 0; i < lrp->stat.st_nmodes; i++) {
- for (j = 0; j < lrp->stat.st_nmodes; j++)
- __db_msgadd(env, &mb, "%lu\t", (u_long)
- lt->conflicts[i * lrp->stat.st_nmodes + j]);
- DB_MSGBUF_FLUSH(env, &mb);
- }
- }
- LOCK_REGION_UNLOCK(env);
-
- if (LF_ISSET(DB_STAT_ALL | DB_STAT_LOCK_LOCKERS)) {
- __db_msg(env, "%s", DB_GLOBAL(db_line));
- __db_msg(env, "Locks grouped by lockers:");
- __lock_print_header(env);
- LOCK_LOCKERS(env, lrp);
- for (k = 0; k < lrp->locker_t_size; k++)
- SH_TAILQ_FOREACH(
- lip, &lt->locker_tab[k], links, __db_locker)
- (void)__lock_dump_locker(env, &mb, lt, lip);
- UNLOCK_LOCKERS(env, lrp);
- }
-
- if (LF_ISSET(DB_STAT_ALL | DB_STAT_LOCK_OBJECTS)) {
- __db_msg(env, "%s", DB_GLOBAL(db_line));
- __db_msg(env, "Locks grouped by object:");
- __lock_print_header(env);
- for (k = 0; k < lrp->object_t_size; k++) {
- OBJECT_LOCK_NDX(lt, lrp, k);
- SH_TAILQ_FOREACH(
- op, &lt->obj_tab[k], links, __db_lockobj) {
- (void)__lock_dump_object(lt, &mb, op);
- __db_msg(env, "%s", "");
- }
- OBJECT_UNLOCK(lt, lrp, k);
- }
- }
-
- return (0);
-}
-
-static int
-__lock_dump_locker(env, mbp, lt, lip)
- ENV *env;
- DB_MSGBUF *mbp;
- DB_LOCKTAB *lt;
- DB_LOCKER *lip;
-{
- DB_LOCKREGION *lrp;
- struct __db_lock *lp;
- char buf[DB_THREADID_STRLEN];
- u_int32_t ndx;
-
- lrp = lt->reginfo.primary;
-
- __db_msgadd(env,
- mbp, "%8lx dd=%2ld locks held %-4d write locks %-4d pid/thread %s",
- (u_long)lip->id, (long)lip->dd_id, lip->nlocks, lip->nwrites,
- env->dbenv->thread_id_string(env->dbenv, lip->pid, lip->tid, buf));
- if (timespecisset(&lip->tx_expire)) {
-#ifdef HAVE_STRFTIME
- time_t t = (time_t)lip->tx_expire.tv_sec;
- char tbuf[64];
- if (strftime(tbuf, sizeof(tbuf),
- "%m-%d-%H:%M:%S", localtime(&t)) != 0)
- __db_msgadd(env, mbp, "expires %s.%09lu",
- tbuf, (u_long)lip->tx_expire.tv_nsec);
- else
-#endif
- __db_msgadd(env, mbp, "expires %lu.%09lu",
- (u_long)lip->tx_expire.tv_sec,
- (u_long)lip->tx_expire.tv_nsec);
- }
- if (F_ISSET(lip, DB_LOCKER_TIMEOUT))
- __db_msgadd(
- env, mbp, " lk timeout %lu", (u_long)lip->lk_timeout);
- if (timespecisset(&lip->lk_expire)) {
-#ifdef HAVE_STRFTIME
- time_t t = (time_t)lip->lk_expire.tv_sec;
- char tbuf[64];
- if (strftime(tbuf,
- sizeof(tbuf), "%m-%d-%H:%M:%S", localtime(&t)) != 0)
- __db_msgadd(env, mbp, " lk expires %s.%09lu",
- tbuf, (u_long)lip->lk_expire.tv_nsec);
- else
-#endif
- __db_msgadd(env, mbp, " lk expires %lu.%09lu",
- (u_long)lip->lk_expire.tv_sec,
- (u_long)lip->lk_expire.tv_nsec);
- }
- DB_MSGBUF_FLUSH(env, mbp);
-
-	/*
-	 * Some care is needed here: the list is walked without holding
-	 * the object bucket mutex, so it may change while we look.
-	 */
-retry: SH_LIST_FOREACH(lp, &lip->heldby, locker_links, __db_lock) {
- if (!SH_LIST_EMPTY(&lip->heldby) && lp != NULL) {
- ndx = lp->indx;
- OBJECT_LOCK_NDX(lt, lrp, ndx);
- if (lp->indx == ndx)
- __lock_printlock(lt, mbp, lp, 1);
- else {
- OBJECT_UNLOCK(lt, lrp, ndx);
- goto retry;
- }
- OBJECT_UNLOCK(lt, lrp, ndx);
- }
- }
- return (0);
-}
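The retry loop above is a validate-after-lock pattern: read the lock's bucket index without the bucket mutex, take the mutex for that bucket, then re-check the index; if the lock moved in the meantime, drop the mutex and retry. A sketch of the pattern under illustrative types (these are not the DB structures):

```c
#include <stdint.h>
#include <pthread.h>

/* Stand-in for a lock whose hash bucket may change concurrently. */
struct sketch_lock {
	volatile uint32_t indx;		/* current hash bucket */
};

/*
 * Run fn on lp while holding the mutex of the bucket lp lives in.
 * The unlocked read of lp->indx is only a hint; it is re-validated
 * after the bucket mutex is acquired, and the operation retries if
 * the lock migrated in the window between read and lock.
 */
static int
with_bucket_locked(pthread_mutex_t *bucket_mtx, struct sketch_lock *lp,
    void (*fn)(struct sketch_lock *))
{
	uint32_t ndx;

	for (;;) {
		ndx = lp->indx;			/* unlocked read */
		pthread_mutex_lock(&bucket_mtx[ndx]);
		if (lp->indx == ndx) {		/* still in that bucket? */
			if (fn != NULL)
				fn(lp);
			pthread_mutex_unlock(&bucket_mtx[ndx]);
			return (0);
		}
		pthread_mutex_unlock(&bucket_mtx[ndx]);
		/* The lock moved buckets; retry with the new index. */
	}
}
```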
-
-static int
-__lock_dump_object(lt, mbp, op)
- DB_LOCKTAB *lt;
- DB_MSGBUF *mbp;
- DB_LOCKOBJ *op;
-{
- struct __db_lock *lp;
-
- SH_TAILQ_FOREACH(lp, &op->holders, links, __db_lock)
- __lock_printlock(lt, mbp, lp, 1);
- SH_TAILQ_FOREACH(lp, &op->waiters, links, __db_lock)
- __lock_printlock(lt, mbp, lp, 1);
- return (0);
-}
-
-/*
- * __lock_print_header --
- */
-static void
-__lock_print_header(env)
- ENV *env;
-{
- __db_msg(env, "%-8s %-10s%-4s %-7s %s",
- "Locker", "Mode",
- "Count", "Status", "----------------- Object ---------------");
-}
-
-/*
- * __lock_printlock --
- *
- * PUBLIC: void __lock_printlock
- * PUBLIC: __P((DB_LOCKTAB *, DB_MSGBUF *mbp, struct __db_lock *, int));
- */
-void
-__lock_printlock(lt, mbp, lp, ispgno)
- DB_LOCKTAB *lt;
- DB_MSGBUF *mbp;
- struct __db_lock *lp;
- int ispgno;
-{
- DB_LOCKOBJ *lockobj;
- DB_MSGBUF mb;
- ENV *env;
- db_pgno_t pgno;
- u_int32_t *fidp, type;
- u_int8_t *ptr;
- char *fname, *dname, *p, namebuf[26];
- const char *mode, *status;
-
- env = lt->env;
-
- if (mbp == NULL) {
- DB_MSGBUF_INIT(&mb);
- mbp = &mb;
- }
-
- switch (lp->mode) {
- case DB_LOCK_IREAD:
- mode = "IREAD";
- break;
- case DB_LOCK_IWR:
- mode = "IWR";
- break;
- case DB_LOCK_IWRITE:
- mode = "IWRITE";
- break;
- case DB_LOCK_NG:
- mode = "NG";
- break;
- case DB_LOCK_READ:
- mode = "READ";
- break;
- case DB_LOCK_READ_UNCOMMITTED:
- mode = "READ_UNCOMMITTED";
- break;
- case DB_LOCK_WRITE:
- mode = "WRITE";
- break;
- case DB_LOCK_WWRITE:
- mode = "WAS_WRITE";
- break;
- case DB_LOCK_WAIT:
- mode = "WAIT";
- break;
- default:
- mode = "UNKNOWN";
- break;
- }
- switch (lp->status) {
- case DB_LSTAT_ABORTED:
- status = "ABORT";
- break;
- case DB_LSTAT_EXPIRED:
- status = "EXPIRED";
- break;
- case DB_LSTAT_FREE:
- status = "FREE";
- break;
- case DB_LSTAT_HELD:
- status = "HELD";
- break;
- case DB_LSTAT_PENDING:
- status = "PENDING";
- break;
- case DB_LSTAT_WAITING:
- status = "WAIT";
- break;
- default:
- status = "UNKNOWN";
- break;
- }
- __db_msgadd(env, mbp, "%8lx %-10s %4lu %-7s ",
- (u_long)((DB_LOCKER *)R_ADDR(&lt->reginfo, lp->holder))->id,
- mode, (u_long)lp->refcount, status);
-
- lockobj = (DB_LOCKOBJ *)((u_int8_t *)lp + lp->obj);
- ptr = SH_DBT_PTR(&lockobj->lockobj);
- if (ispgno && lockobj->lockobj.size == sizeof(struct __db_ilock)) {
-		/* Assume the DBT holds a page lock (a struct __db_ilock). */
- memcpy(&pgno, ptr, sizeof(db_pgno_t));
- fidp = (u_int32_t *)(ptr + sizeof(db_pgno_t));
- type = *(u_int32_t *)(ptr + sizeof(db_pgno_t) + DB_FILE_ID_LEN);
- (void)__dbreg_get_name(
- lt->env, (u_int8_t *)fidp, &fname, &dname);
- if (fname == NULL && dname == NULL)
- __db_msgadd(env, mbp, "(%lx %lx %lx %lx %lx) ",
- (u_long)fidp[0], (u_long)fidp[1], (u_long)fidp[2],
- (u_long)fidp[3], (u_long)fidp[4]);
- else {
- if (fname != NULL && dname != NULL) {
- (void)snprintf(namebuf, sizeof(namebuf),
- "%14s:%-10s", fname, dname);
- p = namebuf;
- } else if (fname != NULL)
- p = fname;
- else
- p = dname;
- __db_msgadd(env, mbp, "%-25s ", p);
- }
- __db_msgadd(env, mbp, "%-7s %7lu",
- type == DB_PAGE_LOCK ? "page" :
- type == DB_RECORD_LOCK ? "record" : "handle",
- (u_long)pgno);
- } else {
- __db_msgadd(env, mbp, "0x%lx ",
- (u_long)R_OFFSET(&lt->reginfo, lockobj));
- __db_prbytes(env, mbp, ptr, lockobj->lockobj.size);
- }
- DB_MSGBUF_FLUSH(env, mbp);
-}
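The page-lock branch above decodes the lock object byte by byte: a page number, then the file id, then the lock type. A self-contained sketch of that layout, assuming a 32-bit db_pgno_t and a 20-byte DB_FILE_ID_LEN (illustrative types, not the DB structures):

```c
#include <stdint.h>
#include <string.h>

#define ILOCK_FILE_ID_LEN 20	/* assumed value of DB_FILE_ID_LEN */

/* Decoded view of a page-lock object: pgno, file id, lock type. */
struct ilock_view {
	uint32_t pgno;
	uint8_t fileid[ILOCK_FILE_ID_LEN];
	uint32_t type;
};

/*
 * Pull the three fields out of a serialized lock object using the
 * same offsets as the printing code: pgno first, then the file id,
 * then the 32-bit type word.
 */
static void
ilock_decode(const uint8_t *ptr, struct ilock_view *out)
{
	memcpy(&out->pgno, ptr, sizeof(uint32_t));
	memcpy(out->fileid, ptr + sizeof(uint32_t), ILOCK_FILE_ID_LEN);
	memcpy(&out->type,
	    ptr + sizeof(uint32_t) + ILOCK_FILE_ID_LEN, sizeof(uint32_t));
}
```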
-
-#else /* !HAVE_STATISTICS */
-
-int
-__lock_stat_pp(dbenv, statp, flags)
- DB_ENV *dbenv;
- DB_LOCK_STAT **statp;
- u_int32_t flags;
-{
- COMPQUIET(statp, NULL);
- COMPQUIET(flags, 0);
-
- return (__db_stat_not_built(dbenv->env));
-}
-
-int
-__lock_stat_print_pp(dbenv, flags)
- DB_ENV *dbenv;
- u_int32_t flags;
-{
- COMPQUIET(flags, 0);
-
- return (__db_stat_not_built(dbenv->env));
-}
-#endif
diff --git a/lock/lock_stub.c b/lock/lock_stub.c
deleted file mode 100644
index a016bc3..0000000
--- a/lock/lock_stub.c
+++ /dev/null
@@ -1,506 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-
-/*
- * If the library wasn't compiled with locking support, various routines
- * aren't available. Stub them here, returning an appropriate error.
- */
-static int __db_nolocking __P((ENV *));
-
-/*
- * __db_nolocking --
- * Error when a Berkeley DB build doesn't include the locking subsystem.
- */
-static int
-__db_nolocking(env)
- ENV *env;
-{
- __db_errx(env, "library build did not include support for locking");
- return (DB_OPNOTSUP);
-}
-
-int
-__lock_env_create(dbenv)
- DB_ENV *dbenv;
-{
- COMPQUIET(dbenv, 0);
- return (0);
-}
-
-void
-__lock_env_destroy(dbenv)
- DB_ENV *dbenv;
-{
- COMPQUIET(dbenv, 0);
-}
-
-int
-__lock_get_lk_conflicts(dbenv, lk_conflictsp, lk_modesp)
- DB_ENV *dbenv;
- const u_int8_t **lk_conflictsp;
- int *lk_modesp;
-{
- COMPQUIET(lk_conflictsp, NULL);
- COMPQUIET(lk_modesp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_lk_detect(dbenv, lk_detectp)
- DB_ENV *dbenv;
- u_int32_t *lk_detectp;
-{
- COMPQUIET(lk_detectp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_lk_max_lockers(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- COMPQUIET(lk_maxp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_lk_max_locks(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- COMPQUIET(lk_maxp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_lk_max_objects(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- COMPQUIET(lk_maxp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_lk_partitions(dbenv, lk_maxp)
- DB_ENV *dbenv;
- u_int32_t *lk_maxp;
-{
- COMPQUIET(lk_maxp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_env_timeout(dbenv, timeoutp, flag)
- DB_ENV *dbenv;
- db_timeout_t *timeoutp;
- u_int32_t flag;
-{
- COMPQUIET(timeoutp, NULL);
- COMPQUIET(flag, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_detect_pp(dbenv, flags, atype, abortp)
- DB_ENV *dbenv;
- u_int32_t flags, atype;
- int *abortp;
-{
- COMPQUIET(flags, 0);
- COMPQUIET(atype, 0);
- COMPQUIET(abortp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_get_pp(dbenv, locker, flags, obj, lock_mode, lock)
- DB_ENV *dbenv;
- u_int32_t locker, flags;
- DBT *obj;
- db_lockmode_t lock_mode;
- DB_LOCK *lock;
-{
- COMPQUIET(locker, 0);
- COMPQUIET(flags, 0);
- COMPQUIET(obj, NULL);
- COMPQUIET(lock_mode, 0);
- COMPQUIET(lock, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_id_pp(dbenv, idp)
- DB_ENV *dbenv;
- u_int32_t *idp;
-{
- COMPQUIET(idp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_id_free_pp(dbenv, id)
- DB_ENV *dbenv;
- u_int32_t id;
-{
- COMPQUIET(id, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_put_pp(dbenv, lock)
- DB_ENV *dbenv;
- DB_LOCK *lock;
-{
- COMPQUIET(lock, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_stat_pp(dbenv, statp, flags)
- DB_ENV *dbenv;
- DB_LOCK_STAT **statp;
- u_int32_t flags;
-{
- COMPQUIET(statp, NULL);
- COMPQUIET(flags, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_stat_print_pp(dbenv, flags)
- DB_ENV *dbenv;
- u_int32_t flags;
-{
- COMPQUIET(flags, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_vec_pp(dbenv, locker, flags, list, nlist, elistp)
- DB_ENV *dbenv;
- u_int32_t locker, flags;
- int nlist;
- DB_LOCKREQ *list, **elistp;
-{
- COMPQUIET(locker, 0);
- COMPQUIET(flags, 0);
- COMPQUIET(list, NULL);
- COMPQUIET(nlist, 0);
- COMPQUIET(elistp, NULL);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_conflicts(dbenv, lk_conflicts, lk_modes)
- DB_ENV *dbenv;
- u_int8_t *lk_conflicts;
- int lk_modes;
-{
- COMPQUIET(lk_conflicts, NULL);
- COMPQUIET(lk_modes, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_detect(dbenv, lk_detect)
- DB_ENV *dbenv;
- u_int32_t lk_detect;
-{
- COMPQUIET(lk_detect, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_max_locks(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- COMPQUIET(lk_max, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_max_lockers(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- COMPQUIET(lk_max, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_max_objects(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- COMPQUIET(lk_max, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_lk_partitions(dbenv, lk_max)
- DB_ENV *dbenv;
- u_int32_t lk_max;
-{
- COMPQUIET(lk_max, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_set_env_timeout(dbenv, timeout, flags)
- DB_ENV *dbenv;
- db_timeout_t timeout;
- u_int32_t flags;
-{
- COMPQUIET(timeout, 0);
- COMPQUIET(flags, 0);
- return (__db_nolocking(dbenv->env));
-}
-
-int
-__lock_open(env, create_ok)
- ENV *env;
- int create_ok;
-{
- COMPQUIET(create_ok, 0);
- return (__db_nolocking(env));
-}
-
-int
-__lock_id_free(env, sh_locker)
- ENV *env;
- DB_LOCKER *sh_locker;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(sh_locker, 0);
- return (0);
-}
-
-int
-__lock_env_refresh(env)
- ENV *env;
-{
- COMPQUIET(env, NULL);
- return (0);
-}
-
-int
-__lock_stat_print(env, flags)
- ENV *env;
- u_int32_t flags;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(flags, 0);
- return (0);
-}
-
-int
-__lock_put(env, lock)
- ENV *env;
- DB_LOCK *lock;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(lock, NULL);
- return (0);
-}
-
-int
-__lock_vec(env, sh_locker, flags, list, nlist, elistp)
- ENV *env;
- DB_LOCKER *sh_locker;
- u_int32_t flags;
- int nlist;
- DB_LOCKREQ *list, **elistp;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(sh_locker, 0);
- COMPQUIET(flags, 0);
- COMPQUIET(list, NULL);
- COMPQUIET(nlist, 0);
- COMPQUIET(elistp, NULL);
- return (0);
-}
-
-int
-__lock_get(env, locker, flags, obj, lock_mode, lock)
- ENV *env;
- DB_LOCKER *locker;
- u_int32_t flags;
- const DBT *obj;
- db_lockmode_t lock_mode;
- DB_LOCK *lock;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(locker, NULL);
- COMPQUIET(flags, 0);
- COMPQUIET(obj, NULL);
- COMPQUIET(lock_mode, 0);
- COMPQUIET(lock, NULL);
- return (0);
-}
-
-int
-__lock_id(env, idp, lkp)
- ENV *env;
- u_int32_t *idp;
- DB_LOCKER **lkp;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(idp, NULL);
- COMPQUIET(lkp, NULL);
- return (0);
-}
-
-int
-__lock_inherit_timeout(env, parent, locker)
- ENV *env;
- DB_LOCKER *parent, *locker;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(parent, NULL);
- COMPQUIET(locker, NULL);
- return (0);
-}
-
-int
-__lock_set_timeout(env, locker, timeout, op)
- ENV *env;
- DB_LOCKER *locker;
- db_timeout_t timeout;
- u_int32_t op;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(locker, NULL);
- COMPQUIET(timeout, 0);
- COMPQUIET(op, 0);
- return (0);
-}
-
-int
-__lock_addfamilylocker(env, pid, id)
- ENV *env;
- u_int32_t pid, id;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(pid, 0);
- COMPQUIET(id, 0);
- return (0);
-}
-
-int
-__lock_freefamilylocker(lt, sh_locker)
- DB_LOCKTAB *lt;
- DB_LOCKER *sh_locker;
-{
- COMPQUIET(lt, NULL);
- COMPQUIET(sh_locker, NULL);
- return (0);
-}
-
-int
-__lock_downgrade(env, lock, new_mode, flags)
- ENV *env;
- DB_LOCK *lock;
- db_lockmode_t new_mode;
- u_int32_t flags;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(lock, NULL);
- COMPQUIET(new_mode, 0);
- COMPQUIET(flags, 0);
- return (0);
-}
-
-int
-__lock_locker_is_parent(env, locker, child, retp)
- ENV *env;
- DB_LOCKER *locker;
- DB_LOCKER *child;
- int *retp;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(locker, NULL);
- COMPQUIET(child, NULL);
-
- *retp = 1;
- return (0);
-}
-
-void
-__lock_set_thread_id(lref, pid, tid)
- void *lref;
- pid_t pid;
- db_threadid_t tid;
-{
- COMPQUIET(lref, NULL);
- COMPQUIET(pid, 0);
- COMPQUIET(tid, 0);
-}
-
-int
-__lock_failchk(env)
- ENV *env;
-{
- COMPQUIET(env, NULL);
- return (0);
-}
-
-int
-__lock_get_list(env, locker, flags, lock_mode, list)
- ENV *env;
- DB_LOCKER *locker;
- u_int32_t flags;
- db_lockmode_t lock_mode;
- DBT *list;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(locker, NULL);
- COMPQUIET(flags, 0);
- COMPQUIET(lock_mode, 0);
- COMPQUIET(list, NULL);
- return (0);
-}
-
-void
-__lock_list_print(env, list)
- ENV *env;
- DBT *list;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(list, NULL);
-}
-
-int
-__lock_getlocker(lt, locker, create, retp)
- DB_LOCKTAB *lt;
- u_int32_t locker;
- int create;
- DB_LOCKER **retp;
-{
- COMPQUIET(locker, 0);
- COMPQUIET(create, 0);
- COMPQUIET(retp, NULL);
- return (__db_nolocking(lt->env));
-}
-
-int
-__lock_id_set(env, cur_id, max_id)
- ENV *env;
- u_int32_t cur_id, max_id;
-{
- COMPQUIET(env, NULL);
- COMPQUIET(cur_id, 0);
- COMPQUIET(max_id, 0);
- return (0);
-}
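The stub file above compiles every public locking entry point down to a function that reports `DB_OPNOTSUP`, so builds configured without the locking subsystem still link cleanly. A minimal stand-alone sketch of that pattern (the names and the `-1` error value here are invented for illustration, not Berkeley DB's):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-in for Berkeley DB's DB_OPNOTSUP error code. */
#define MY_OPNOTSUP (-1)

/* Shared error routine, mirroring __db_nolocking() above. */
static int
subsys_unavailable(const char *name)
{
	fprintf(stderr,
	    "library build did not include support for %s\n", name);
	return (MY_OPNOTSUP);
}

/* A public entry point reduced to a stub; the casts play the role of
 * COMPQUIET, silencing unused-parameter warnings. */
int
my_lock_get(void *env, unsigned int flags)
{
	(void)env;
	(void)flags;
	return (subsys_unavailable("locking"));
}
```

Callers that test only for success keep working; callers that actually need the subsystem get a distinctive error at run time instead of a link failure.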
diff --git a/lock/lock_timer.c b/lock/lock_timer.c
deleted file mode 100644
index 4df6c1a..0000000
--- a/lock/lock_timer.c
+++ /dev/null
@@ -1,174 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/lock.h"
-
-/*
- * __lock_set_timeout --
- * Set timeout values in shared memory.
- *
- * This is called from the transaction system. We either set the time that
- * this transaction expires or the amount of time a lock for this transaction
- * is permitted to wait.
- *
- * PUBLIC: int __lock_set_timeout __P((ENV *,
- * PUBLIC: DB_LOCKER *, db_timeout_t, u_int32_t));
- */
-int
-__lock_set_timeout(env, locker, timeout, op)
- ENV *env;
- DB_LOCKER *locker;
- db_timeout_t timeout;
- u_int32_t op;
-{
- int ret;
-
- if (locker == NULL)
- return (0);
- LOCK_REGION_LOCK(env);
- ret = __lock_set_timeout_internal(env, locker, timeout, op);
- LOCK_REGION_UNLOCK(env);
- return (ret);
-}
-
-/*
- * __lock_set_timeout_internal
- * -- set timeout values in shared memory.
- *
- * This is the internal version called from the lock system. We either set
- * the time that this transaction expires or the amount of time that a lock
- * for this transaction is permitted to wait.
- *
- * PUBLIC: int __lock_set_timeout_internal
- * PUBLIC: __P((ENV *, DB_LOCKER *, db_timeout_t, u_int32_t));
- */
-int
-__lock_set_timeout_internal(env, sh_locker, timeout, op)
- ENV *env;
- DB_LOCKER *sh_locker;
- db_timeout_t timeout;
- u_int32_t op;
-{
- DB_LOCKREGION *region;
- region = env->lk_handle->reginfo.primary;
-
- if (op == DB_SET_TXN_TIMEOUT) {
- if (timeout == 0)
- timespecclear(&sh_locker->tx_expire);
- else
- __lock_expires(env, &sh_locker->tx_expire, timeout);
- } else if (op == DB_SET_LOCK_TIMEOUT) {
- sh_locker->lk_timeout = timeout;
- F_SET(sh_locker, DB_LOCKER_TIMEOUT);
- } else if (op == DB_SET_TXN_NOW) {
- timespecclear(&sh_locker->tx_expire);
- __lock_expires(env, &sh_locker->tx_expire, 0);
- sh_locker->lk_expire = sh_locker->tx_expire;
- if (!timespecisset(&region->next_timeout) ||
- timespeccmp(
- &region->next_timeout, &sh_locker->lk_expire, >))
- region->next_timeout = sh_locker->lk_expire;
- } else
- return (EINVAL);
-
- return (0);
-}
-
-/*
- * __lock_inherit_timeout
- * -- inherit timeout values from parent locker.
- * This is called from the transaction system. This will
- * return EINVAL if the parent does not exist or did not
- * have a current txn timeout set.
- *
- * PUBLIC: int __lock_inherit_timeout __P((ENV *, DB_LOCKER *, DB_LOCKER *));
- */
-int
-__lock_inherit_timeout(env, parent, locker)
- ENV *env;
- DB_LOCKER *parent, *locker;
-{
- int ret;
-
- ret = 0;
- LOCK_REGION_LOCK(env);
-
- /*
-	 * If the parent is not there yet, that's ok. If it
-	 * does not have any timeouts set, then avoid creating
- * the child locker at this point.
- */
- if (parent == NULL ||
-	    (!timespecisset(&parent->tx_expire) &&
- !F_ISSET(parent, DB_LOCKER_TIMEOUT))) {
- ret = EINVAL;
- goto err;
- }
-
- locker->tx_expire = parent->tx_expire;
-
- if (F_ISSET(parent, DB_LOCKER_TIMEOUT)) {
- locker->lk_timeout = parent->lk_timeout;
- F_SET(locker, DB_LOCKER_TIMEOUT);
- if (!timespecisset(&parent->tx_expire))
- ret = EINVAL;
- }
-
-err: LOCK_REGION_UNLOCK(env);
- return (ret);
-}
-
-/*
- * __lock_expires --
- * Set the expire time given the time to live.
- *
- * PUBLIC: void __lock_expires __P((ENV *, db_timespec *, db_timeout_t));
- */
-void
-__lock_expires(env, timespecp, timeout)
- ENV *env;
- db_timespec *timespecp;
- db_timeout_t timeout;
-{
- db_timespec v;
-
- /*
- * If timespecp is set then it contains "now". This avoids repeated
- * system calls to get the time.
- */
- if (!timespecisset(timespecp))
- __os_gettime(env, timespecp, 1);
-
- /* Convert the microsecond timeout argument to a timespec. */
- DB_TIMEOUT_TO_TIMESPEC(timeout, &v);
-
- /* Add the timeout to "now". */
- timespecadd(timespecp, &v);
-}
-
-/*
- * __lock_expired -- determine if a lock has expired.
- *
- * PUBLIC: int __lock_expired __P((ENV *, db_timespec *, db_timespec *));
- */
-int
-__lock_expired(env, now, timespecp)
- ENV *env;
- db_timespec *now, *timespecp;
-{
- if (!timespecisset(timespecp))
- return (0);
-
- if (!timespecisset(now))
- __os_gettime(env, now, 1);
-
- return (timespeccmp(now, timespecp, >=));
-}
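The arithmetic that `__lock_expires()` and `__lock_expired()` delegate to the `DB_TIMEOUT_TO_TIMESPEC`, `timespecadd`, and `timespeccmp` macros can be sketched directly against a POSIX `struct timespec` (the helper names below are illustrative, not Berkeley DB's):

```c
#include <assert.h>
#include <time.h>

/* Sketch of __lock_expires(): convert a microsecond timeout to
 * nanoseconds, add it to "now", and carry any nanosecond overflow
 * into the seconds field. */
static void
expires_in(struct timespec *now, unsigned long timeout_us)
{
	now->tv_sec += timeout_us / 1000000;
	now->tv_nsec += (long)(timeout_us % 1000000) * 1000;
	if (now->tv_nsec >= 1000000000L) {
		now->tv_sec++;
		now->tv_nsec -= 1000000000L;
	}
}

/* Sketch of __lock_expired(): a deadline that was never set (all
 * zeros here, standing in for !timespecisset) never expires;
 * otherwise compare it against "now". */
static int
expired(const struct timespec *now, const struct timespec *deadline)
{
	if (deadline->tv_sec == 0 && deadline->tv_nsec == 0)
		return (0);
	if (now->tv_sec != deadline->tv_sec)
		return (now->tv_sec > deadline->tv_sec);
	return (now->tv_nsec >= deadline->tv_nsec);
}
```

Caching "now" in the caller, as `__lock_expires()` does when `timespecp` is already set, avoids one system call per expiry computation.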
diff --git a/lock/lock_util.c b/lock/lock_util.c
deleted file mode 100644
index 2b69a8d..0000000
--- a/lock/lock_util.c
+++ /dev/null
@@ -1,97 +0,0 @@
-/*-
- * See the file LICENSE for redistribution information.
- *
- * Copyright (c) 1996-2009 Oracle. All rights reserved.
- *
- * $Id$
- */
-
-#include "db_config.h"
-
-#include "db_int.h"
-#include "dbinc/db_page.h"
-#include "dbinc/hash.h"
-#include "dbinc/lock.h"
-
-/*
- * The next two functions are the hash functions used to store objects in the
- * lock hash tables. They are hashing the same items, but one (__lock_ohash)
- * takes a DBT (used for hashing a parameter passed from the user) and the
- * other (__lock_lhash) takes a DB_LOCKOBJ (used for hashing something that is
- * already in the lock manager). In both cases, we have a special check to
- * fast path the case where we think we are doing a hash on a DB page/fileid
- * pair. If the size is right, then we do the fast hash.
- *
- * We know that DB uses DB_LOCK_ILOCK types for its lock objects. The first
- * four bytes are the 4-byte page number and the next DB_FILE_ID_LEN bytes
- * are a unique file id, where the first 4 bytes on UNIX systems are the file
- * inode number, and the first 4 bytes on Windows systems are the FileIndexLow
- * bytes. This is followed by a random number. The inode values tend
- * to increment fairly slowly and are not good for hashing. So, we use
- * the XOR of the page number and the four bytes of the file id random
- * number to produce a 32-bit hash value.
- *
- * We have no particular reason to believe that this algorithm will produce
- * a good hash, but we want a fast hash more than we want a good one, when
- * we're coming through this code path.
- */
-#define FAST_HASH(P) { \
- u_int32_t __h; \
- u_int8_t *__cp, *__hp; \
- __hp = (u_int8_t *)&__h; \
- __cp = (u_int8_t *)(P); \
- __hp[0] = __cp[0] ^ __cp[12]; \
- __hp[1] = __cp[1] ^ __cp[13]; \
- __hp[2] = __cp[2] ^ __cp[14]; \
- __hp[3] = __cp[3] ^ __cp[15]; \
- return (__h); \
-}
-
-/*
- * __lock_ohash --
- *
- * PUBLIC: u_int32_t __lock_ohash __P((const DBT *));
- */
-u_int32_t
-__lock_ohash(dbt)
- const DBT *dbt;
-{
- if (dbt->size == sizeof(DB_LOCK_ILOCK))
- FAST_HASH(dbt->data);
-
- return (__ham_func5(NULL, dbt->data, dbt->size));
-}
-
-/*
- * __lock_lhash --
- *
- * PUBLIC: u_int32_t __lock_lhash __P((DB_LOCKOBJ *));
- */
-u_int32_t
-__lock_lhash(lock_obj)
- DB_LOCKOBJ *lock_obj;
-{
- void *obj_data;
-
- obj_data = SH_DBT_PTR(&lock_obj->lockobj);
-
- if (lock_obj->lockobj.size == sizeof(DB_LOCK_ILOCK))
- FAST_HASH(obj_data);
-
- return (__ham_func5(NULL, obj_data, lock_obj->lockobj.size));
-}
-
-/*
- * __lock_nomem --
- * Report a lack of some resource.
- *
- * PUBLIC: int __lock_nomem __P((ENV *, const char *));
- */
-int
-__lock_nomem(env, res)
- ENV *env;
- const char *res;
-{
- __db_errx(env, "Lock table is out of available %s", res);
- return (ENOMEM);
-}
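The `FAST_HASH` macro above can be exercised stand-alone. This sketch re-expresses it as a function; the 20-byte `DB_LOCK_ILOCK`-like layout (4-byte page number, then a file id whose bytes at offsets 12-15 hold the random portion) is an assumption for illustration, taken from the comment above:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Function version of the FAST_HASH macro: XOR the 4-byte page number
 * (bytes 0-3 of the lock object) with bytes 12-15, writing the result
 * byte-by-byte into a 32-bit hash. */
static uint32_t
fast_hash(const uint8_t *p)
{
	uint32_t h;
	uint8_t *hp;

	hp = (uint8_t *)&h;
	hp[0] = p[0] ^ p[12];
	hp[1] = p[1] ^ p[13];
	hp[2] = p[2] ^ p[14];
	hp[3] = p[3] ^ p[15];
	return (h);
}
```

As the comment says, this trades hash quality for speed: it touches only 8 of the object's bytes and needs no multiplies, unlike the general `__ham_func5()` fallback.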