http://www.perlmonks.org?node_id=955346

clinton has asked for the wisdom of the Perl Monks concerning the following question:

In our homegrown ORM we have an in-memory cache, which enables us to ensure that only one instance of any object is live in memory at any one time.

In other words:

    $one = MyObject->get(123);
    $two = MyObject->get(123);
    refaddr($one) == refaddr($two)
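Roughly, get() boils down to a hash keyed by id, something like this minimal sketch (_fetch_from_db() and the details are stand-ins, not our real code):

    package MyObject;
    use strict;
    use warnings;

    my %CACHE;    # one slot per object id, cleared at the end of each web request

    sub get {
        my ( $class, $id ) = @_;

        # return the single live instance if we already have one ...
        return $CACHE{$id} if $CACHE{$id};

        # ... otherwise load it and remember it for the rest of the request
        return $CACHE{$id} = $class->_fetch_from_db($id);
    }

    sub clear_cache { %CACHE = () }    # called when the web request finishes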

I find this setup useful because:

When I do a search against the DB, it returns a list of objects, which I can then retrieve (in bulk) from the following tiers (there's a sketch of this just after this list):

-> the in-memory cache
-> memcached
-> the DB

No DB-based object contains another DB-based object, to avoid circular references. Instead, it just contains the ID of the object. Retrieving the actual object is cheap (assuming it has already been loaded) because we can just request the single instance of that object from the in-memory cache.

The in-memory cache is cleared at the end of each web-request.
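Here's that bulk-retrieval sketch: roughly how the tiered lookup could go (get_multi() on our side, _fetch_rows() and the memcached wiring are illustrative names, not our real API; %CACHE is the in-memory cache from the sketch above):

    use Cache::Memcached::Fast;

    my $memd = Cache::Memcached::Fast->new( { servers => ['127.0.0.1:11211'] } );

    sub get_multi {
        my ( $class, @ids ) = @_;

        # tier 1: the in-memory cache
        my %found;
        $found{$_} = $CACHE{$_} for grep { $CACHE{$_} } @ids;
        my @missing = grep { !$found{$_} } @ids;

        # tier 2: memcached, one round trip for all remaining ids
        if (@missing) {
            my $hits = $memd->get_multi(@missing);
            $found{$_} = $CACHE{$_} = $class->new( $hits->{$_} ) for keys %$hits;
            @missing = grep { !$found{$_} } @missing;
        }

        # tier 3: the DB for anything still missing
        if (@missing) {
            $found{ $_->{id} } = $CACHE{ $_->{id} } = $class->new($_)
                for $class->_fetch_rows(@missing);
        }

        return grep { defined } map { $found{$_} } @ids;
    }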

The above is pretty similar to how KiokuDB works.

THE FUTURE AND BEYOND:

I’m currently working on an “ORM” that uses ElasticSearch as its backend. (“ORM” is in quotes because ES functions as a Lucene-powered document store, rather than being a relational DB).

I’d like to replicate the current functionality, because I think it has merits, but there is a complication: time doesn’t necessarily flow forwards.

To explain: a GET by ID in ElasticSearch is real-time and always returns the latest version of a document, but search only sees changes once the index has been refreshed (by default, once a second), so a search can briefly lag behind a GET.

What this means is that I could:

    GET doc 123        -> returns version 6
    SEARCH for doc 123 -> returns version 5

This would normally never happen in a traditional DB, because updates are atomic, and indexes are updated as the document is indexed. But it could happen in a master-slave setup where there is replication lag.
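To make that concrete, here's a rough demonstration using the (newer) Search::Elasticsearch client and a made-up 'docs' index; the exact calls are illustrative, not production code:

    use Search::Elasticsearch;

    my $es = Search::Elasticsearch->new;

    $es->index( index => 'docs', id => 123, body => { name => 'Joe' } );    # _version 1
    $es->indices->refresh( index => 'docs' );       # make version 1 visible to search

    $es->index( index => 'docs', id => 123, body => { name => 'Bob' } );    # _version 2

    # search only sees what was visible at the last refresh, so within the
    # refresh interval this can still return the 'Joe' document
    my $hit = $es->search(
        index => 'docs',
        body  => { query => { ids => { values => ['123'] } } },
    )->{hits}{hits}[0];
    print "SEARCH sees: $hit->{_source}{name}\n";

    # a GET by id is real-time, so it sees 'Bob' (version 2) straight away
    my $doc = $es->get( index => 'docs', id => 123 );
    print "GET sees:    $doc->{_source}{name}\n";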

Also, I’m guessing this is a common scenario in NoSQL datastores.

Note:

This is an issue just for the current request, not for writes to ES. Every doc in ES has a _version number, and if you try to update the wrong version, it will throw a Conflict error, in which case you can refetch the latest version and retry, or report the conflict.
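For reference, a version-checked save might look roughly like this (again using Search::Elasticsearch for illustration; save(), deflate() and handle_conflict() are made-up names, and newer ES releases prefer if_seq_no/if_primary_term over version for this):

    use Try::Tiny;
    use Scalar::Util qw(blessed);

    sub save {
        my $self = shift;
        try {
            my $result = $es->index(
                index   => 'myapp',
                id      => $self->id,
                version => $self->version,    # fails unless this is still the current version
                body    => $self->deflate,
            );
            $self->version( $result->{_version} );    # remember the new version
        }
        catch {
            die $_ unless blessed($_) && $_->is('Conflict');
            # someone else saved a newer version first: refetch and retry,
            # merge the changes, or report the conflict to the user
            $self->handle_conflict($_);
        };
        return $self;
    }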

So where might this be a problem?

Scenarios:

    $a = get    -> version 1
    $b = search -> version 1

This one is easy. $b can just reuse the object in $a.

    $a = get    -> version 1
    $b = search -> version 1
    $a->change()
    $a->save()  -> version 2

Potentially, the object no longer matches the search that you did, so you may be displaying incorrect results. (eg you search for name == ‘Joe’, then change name to ‘Bob’). But this looks like a reasonable process to me.

    $a = get    -> version 2
    $b = search -> version 1

Our search has returned an older version of the object. The newer version might or might not match the search parameters. Do we display the old results, or the new ones?

    $a = get    -> version 1
    $a->change()
    $b = search -> version 1

We have a changed (but as yet unsaved) object in the cache. Should $b contain the changed object, or the pristine object?

    $a = get    -> version 1
    $a->change()
    $b = search -> version 2

We have an old (and changed) version in $a. We know that a newer version already exists in the DB, so we’ll get a conflict error if we try to save $a. What do we do?

Proposal:

I think my logic will look something like this:

    my ( $class, $id, $version, $data ) = @_;
    if ( my $cached = $cache->{$id} ) {
        # we already hold this version (or a newer one): reuse the shared instance
        return $cached if $version <= $cached->{version};
        # older but unmodified: bring the cached instance up to date in place
        return $cached->re_new($data) unless $cached->has_changed;
        # older, with unsaved edits: keep it - saving will throw a conflict later anyway
        return $cached;
    }
    return $cache->{$id} = $class->new($data);

In other words, all instances of the object are always updated to the latest version, EXCEPT if the current instance has been edited and not yet saved. (Saving will throw a conflict error later on anyway).
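To show how both paths share the cache, here's a rough sketch that assumes the snippet above is wrapped in a class method called inflate(), with $es as the Search::Elasticsearch client and 'myapp' as a made-up index name:

    sub get {
        my ( $class, $id ) = @_;
        return $cache->{$id} if $cache->{$id};    # already live in memory
        my $doc = $es->get( index => 'myapp', id => $id );
        return $class->inflate( $id, $doc->{_version}, $doc->{_source} );
    }

    sub search {
        my ( $class, %query ) = @_;
        my $results = $es->search(
            index => 'myapp',
            body  => { version => \1, %query },    # ask for _version with each hit
        );
        # every hit goes through the same cache logic as get()
        return map { $class->inflate( $_->{_id}, $_->{_version}, $_->{_source} ) }
            @{ $results->{hits}{hits} };
    }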

Also, if you wanted to “detach” an object, then you could clone it and update it independently.
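A detach() could be as simple as a deep copy that never goes back into the cache (sketch only; the name is made up):

    use Storable qw(dclone);

    sub detach {
        my $self = shift;
        return dclone($self);    # an independent copy: edits here never touch the cached instance
    }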

The only issue is that search results may contain a newer object which no longer matches the search parameters. Personally, I’m probably happy to live with this, but I probably need (a) a default setting and (b) a dynamic flag which the user can use to control this behaviour.
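For (a) and (b), a package variable plus local would give me both a default and a per-call (dynamic) override, along these lines (names invented):

    our $UPGRADE_SEARCH_RESULTS = 1;    # (a) the default: inflate hits to the newest cached version

    sub search_as_of_match {
        my ( $class, %query ) = @_;

        # (b) the dynamic flag: only this call (and anything it calls) sees the override
        local $UPGRADE_SEARCH_RESULTS = 0;
        return $class->search(%query);
    }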

Thanks for getting to the bottom of this.

What do you think? See any obvious (or not-so-obvious) flaws?