
Performance

Automation has value only insofar as it requires no compromises in architecture (to integrate with existing systems), extensibility (to address elements not automated), or performance. This page provides details on how Espresso Logic delivers enterprise-class performance.

Espresso Logic is designed to deliver on all of the best-practice patterns below. Note that the relevant optimizations are revisited on each logic change, so, in the same way a DBMS optimizer revises retrieval plans to maintain high performance, Espresso Logic performance remains high over maintenance iterations.

Minimizing Client Latency

Modern applications must often support clients connected over high-latency, cloud-based connections. The following features are designed to minimize the impact of client connection latency.

Rich Resource Objects

When retrieving objects for presentation, you can define Resources that include multiple types, such as a Customer with their Payments, Orders, and Items. These are delivered in a single response message, so that only a single trip is required.
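
For example, a single GET might return a Customer document with its Payments and Orders (and their Items) nested inside it. The sketch below is illustrative only; the resource name, URL, and attribute names are hypothetical, not the literal API.

    // Hypothetical resource shape returned by one GET of a "CustomerInfo" Resource.
    interface LineItem { ProductNumber: number; Quantity: number; Amount: number; }
    interface Order    { OrderNumber: number; AmountTotal: number; LineItems: LineItem[]; }
    interface Payment  { PaymentNumber: number; Amount: number; }
    interface Customer { Name: string; Balance: number; Payments: Payment[]; Orders: Order[]; }

    // One round trip delivers the customer and all related rows.
    async function getCustomerInfo(baseUrl: string, name: string): Promise<Customer[]> {
      const response = await fetch(`${baseUrl}/CustomerInfo?filter=Name='${name}'`);
      return response.json();
    }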

Note that this requirement is not fully satisfied by Views. Views are often not updateable, and joins produce Cartesian products when multiple child tables are joined to the same parent. In our example, a Customer with 5 Payments and 10 Orders would return 50 rows - quite unreasonable for the client to decode and present.


Leverage Relational Database Query Power

Moreover, each Resource/SubResource can be a full relational query. As such, these can be sent in a single trip to the REST server (and then to the database).

Contrast this to less powerful retrieval engines, where common requirements such as sums and counts must be computed by the client.  This drives the number of queries up n-fold, which can have a dramatic effect on performance.
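
To make the contrast concrete, here is a hedged sketch (endpoint and attribute names are hypothetical): computing each customer's order total on the client requires one extra query per customer, whereas a Resource that includes the sum delivers it in the same single response.

    // Client-side aggregation: N+1 queries (one per customer).
    async function balancesComputedByClient(baseUrl: string): Promise<Map<string, number>> {
      const customers: any[] = await (await fetch(`${baseUrl}/Customers`)).json();
      const balances = new Map<string, number>();
      for (const c of customers) {
        const orders: any[] =
          await (await fetch(`${baseUrl}/Orders?filter=CustomerName='${c.Name}'`)).json();
        balances.set(c.Name, orders.reduce((sum, o) => sum + o.AmountTotal, 0));
      }
      return balances;
    }

    // Server-side aggregation: a single query returns the sum with each row, e.g.
    //   GET {baseUrl}/CustomersWithBalance  ->  [{ Name: "Alpha", Balance: 123.45 }, ...]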

Pagination

Large result sets can have a devastating effect on the client, network, server, and database.  Pagination is provided to truncate large results, with provisions to retrieve remaining results (e.g., when/if the End User scrolls).

This can be a complex problem.  Consider a Resource of Customer, Orders and Items.  If there are many Orders, pagination must occur at this level, with provision for including the Line Items on subsequent pagination requests.
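
The sketch below shows the client side of pagination; the pagesize parameter and the next-page link are illustrative, not the literal API.

    // Retrieve a large result set one page at a time, following a server-provided link.
    async function fetchOrdersPageByPage(baseUrl: string, pageSize = 20): Promise<any[]> {
      const rows: any[] = [];
      let url: string | null = `${baseUrl}/Orders?pagesize=${pageSize}`;
      while (url) {
        const page: { rows: any[]; next?: string } = await (await fetch(url)).json();
        rows.push(...page.rows);
        url = page.next ?? null;   // in practice, follow this only when the end user scrolls
      }
      return rows;
    }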

Batched Updates

Network considerations apply to updates as well as retrieval. Consider a client that retrieves many rows and then updates a few of them.

APIs are designed to enable clients to send only the changes, instead of the entire set of objects. They are further designed to enable clients to send multiple row types (e.g., an Order and its Items) in a single message.

This results in a single, small update message.
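
For example (the payload shape and endpoint below are hypothetical), the client can send just the changed attributes of an Order, together with its changed and inserted Line Items, in one PUT:

    // One small message carries the whole change set: parent changes plus child rows.
    async function saveOrderChanges(baseUrl: string): Promise<Response> {
      const changes = {
        OrderNumber: 1001,
        ShipDate: "2015-05-01",                 // only the attributes that changed
        LineItems: [
          { LineNumber: 2, Quantity: 5 },       // changed child row
          { ProductNumber: 77, Quantity: 1 }    // newly inserted child row
        ]
      };
      return fetch(`${baseUrl}/Orders/1001`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(changes)
      });
    }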

Single Message update/refresh

Business logic consists not only of validations, but also of derivations. These derivations often involve rows that are visible to the client but not directly updated by it. For example, saving an order might update the customer's balance. It is critical that the updated balance be reflected on the screen.

Clients typically solve this problem by re-retrieving the data. This is unfortunate in a number of ways. First, it is an extra client/server trip over a high-latency network. Second, it can be difficult to program, for example when the order's key is system-assigned: the computed key may not be known to the client. In such cases, the client may need to re-retrieve the entire rich result set.

Espresso Logic solves this by returning the refresh information in the update response.  So, with a single message, the client can communicate a set of updates, and use the response to show the computations on related data.
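
A hedged sketch of the client side (the response field name is hypothetical): the single PUT applies the changes, and the response carries the recomputed related rows, so the screen can be refreshed without another GET.

    // Apply updates and refresh the display from the same response message.
    async function saveAndRefresh(baseUrl: string, orderChanges: object): Promise<void> {
      const response = await fetch(`${baseUrl}/Orders`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(orderChanges)
      });
      const result = await response.json();
      // Hypothetical field: the rows written by the transaction, including rows the client
      // never touched directly (e.g., the Customer row with its adjusted Balance).
      for (const row of result.updatedRows ?? []) {
        console.log("refresh on screen:", row);
      }
    }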

Server-enforced integrity minimizes client traffic

An infamous anti-pattern is to place business logic in the client. This does not ensure integrity (particularly when the clients are partners), and it causes multiple client/server trips. For example, inserting a new Line Item may require business logic that updates the Order, the Customer, and the Product. If these updates are issued from the client, the result is 4 client/server trips where only 1 should be required.
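
A brief sketch of the difference (endpoints hypothetical): with client-side logic, each insert fans out into several calls; with server-enforced logic, one call suffices.

    // Anti-pattern: the client issues 4 trips to keep related rows consistent.
    async function insertItemClientLogic(baseUrl: string, item: any): Promise<void> {
      await fetch(`${baseUrl}/LineItems`, { method: "POST", body: JSON.stringify(item) });     // 1
      await fetch(`${baseUrl}/Orders/${item.OrderNumber}`, { method: "PUT", body: "{}" });     // 2: adjusted total
      await fetch(`${baseUrl}/Customers/${item.Customer}`, { method: "PUT", body: "{}" });     // 3: adjusted balance
      await fetch(`${baseUrl}/Products/${item.ProductNumber}`, { method: "PUT", body: "{}" }); // 4: adjusted stock
    }

    // Server-enforced logic: one trip; the server adjusts the Order, Customer, and Product.
    async function insertItemServerLogic(baseUrl: string, item: any): Promise<void> {
      await fetch(`${baseUrl}/LineItems`, { method: "POST", body: JSON.stringify(item) });
    }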


Minimizing DBMS Load

The Logic Engine is designed to minimize the cost and number of SQL operations as described in the sub-sections below.


Minimizing Server/DB Latency

You can define the desired region for your Espresso Logic server. This minimizes latency for the SQL operations the Espresso Logic server issues against your database.


Update Logic Pruning eliminates SQLs

The Logic Engine is designed to prune (eliminate) SQL operations where possible. For example (see the sketch following this list):

  1. Parent Reference Pruning: SQLs to access parent rows are averted if the other (local) expression values are unchanged.  For example, if

               attribute-X is derived as attribute-Y * parent.attribute-1,  

    the retrieval for parent is eliminated if attribute-Y is not altered.

  2. Cascade Pruning: If parent attributes referenced by child logic are altered, the system cascades the change to each child row.  If the parent attribute is not altered, the cascade overhead is pruned.  In the same example above, the value of parent.attribute-1 is cascaded only if it is altered.
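
A rough sketch of both pruning decisions (the attribute names mirror the example above; this is illustrative, not the engine's actual code):

    type Row = Record<string, any>;

    // attribute-X is derived as attribute-Y * parent.attribute-1
    function deriveX(oldRow: Row, newRow: Row, readParent: () => Row): void {
      if (newRow.Y === oldRow.Y) {
        return;                          // Parent Reference Pruning: no parent SQL issued
      }
      const parent = readParent();       // parent row is read only when actually needed
      newRow.X = newRow.Y * parent.attribute1;
    }

    // Cascade Pruning: child rows are revisited only if a referenced parent attribute changed.
    function maybeCascade(oldParent: Row, newParent: Row, cascadeToChildren: () => void): void {
      if (newParent.attribute1 !== oldParent.attribute1) {
        cascadeToChildren();             // child SQL is issued only in this case
      }
    }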

Update Adjustment Logic eliminates multi-level aggregate SQLs

The Logic Engine is also designed to minimize the cost of SQL operations. For example:

  1. Adjustment: for persisted sum/count aggregates, the system does not issue SELECT SUM aggregate queries.  Instead, it makes a single-row update to adjust the parent, based on the old/new values in the child.  Aggregate queries can be particularly costly when they cascade (e.g., the Customer's balance is the sum of the Order amounts, each of which is the sum of that Order's Line Item amounts).

  2. Adjustment Pruning: adjustment occurs only when the summed attribute changes, the foreign key changes, or the qualification condition changes.  If none of these occur, parent access/chaining is averted.  The adjustment arithmetic is sketched below.
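
As a hedged illustration of the adjustment arithmetic (not the engine's actual code):

    // Instead of re-aggregating with
    //   SELECT SUM(Amount) FROM LineItem WHERE OrderNumber = ?
    // the parent is adjusted from the old/new values of the child being written.
    function adjust(parentTotal: number, oldChildAmount: number, newChildAmount: number): number {
      return parentTotal + (newChildAmount - oldChildAmount);   // one single-row update
    }

    // Inserting a $30 Line Item (old value 0) on a $100 Order yields a $130 total;
    // the Customer balance is adjusted the same way, so the cascading chain
    // Customer <- Order <- LineItem costs two single-row updates, not two aggregate queries.
    const newOrderTotal = adjust(100, 0, 30);         // 130
    const newCustomerBalance = adjust(500, 100, 130); // 530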

Transaction Caching

Consider inserting an Order with multiple Line Items. Per the aggregation logic described above, the system needs to update ("adjust") the Order total and the Customer balance for each Line Item.

It is imperative that the system not retrieve these objects multiple times. Not only would this incur substantial overhead, it would also make it difficult to ensure consistent results.

The system therefore maintains a cache for each transaction. All reads and writes go through the cache, and pending writes are flushed at the end of the transaction. This eliminates many SQLs, and ensures a consistent view of updated data.
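
A minimal sketch of such a per-transaction cache (illustrative only):

    type Row = Record<string, any>;

    class TransactionCache {
      private rows = new Map<string, Row>();      // key: "table:primaryKey"
      private dirty = new Set<string>();

      read(table: string, pk: string, selectFromDb: () => Row): Row {
        const key = `${table}:${pk}`;
        let row = this.rows.get(key);
        if (!row) {                                // first touch: one SELECT
          row = selectFromDb();
          this.rows.set(key, row);
        }
        return row;                                // later touches: no SQL at all
      }

      write(table: string, pk: string, row: Row): void {
        const key = `${table}:${pk}`;
        this.rows.set(key, row);                   // visible to later reads in this transaction
        this.dirty.add(key);                       // held until flush
      }

      flush(updateDb: (key: string, row: Row) => void): void {
        for (const key of this.dirty) {
          updateDb(key, this.rows.get(key)!);      // one UPDATE per changed row, at commit
        }
        this.dirty.clear();
      }
    }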

Locking

Locking is a key performance factor.  The sub-sections below address this for delivering results to the client, and for processing update transactions.

GET: Optimistic Locking

A well-known pattern is optimistic locking.  Acquiring locks while viewing data can drastically reduce concurrency.  Accordingly, locks are not acquired while processing GET requests.  

The system ensures that updated data has not been altered since initial retrieval, as described below.

PUT, POST and DELETE: leverage DBMS Locking and Transactions

Update requests are locked using DBMS Locking services.   There are several cases to consider, as described below.
  • Client Updates
In accordance with Optimistic Locking, the system ensures that client-submitted rows have not been altered since they were retrieved.  This is done by write-locking the row and comparing a time stamp, or (if one is not defined) a hash code of all retrieved data, against the values at retrieval time (see the sketch after these bullets).  Observe that this strategy means a time stamp column is not required.

This process is done as the first part of the transaction, so optimistic locking issues are detected before SQL overhead is incurred.
  • Rule Chaining
All rows processed in a transaction as a consequence of logic execution (e.g., adjusting parent sums or counts) are read locked.

Write locks are acquired at the end of the transaction - the "flush" phase.  Note that many other transactions' read locks could have been acquired and released between the initial read lock and the flush.
  • Referential Integrity
Such data is read in accordance with DBMS policy.
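
A hedged sketch of the optimistic check itself (names and hashing choice are illustrative):

    import { createHash } from "node:crypto";

    type Row = Record<string, any>;

    // Computed when the row is retrieved (GET) and sent back with the update.
    function rowChecksum(row: Row): string {
      return createHash("sha256").update(JSON.stringify(row)).digest("hex");
    }

    // Performed first in the update transaction, before any logic or SQL overhead;
    // if the row has a time stamp column, compare time stamps instead of checksums.
    function ensureUnchanged(checksumFromClient: string, currentDbRow: Row): void {
      if (rowChecksum(currentDbRow) !== checksumFromClient) {
        throw new Error("Optimistic lock failure: row changed since it was retrieved");
      }
    }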


Server Optimizations

The Logic Server itself is designed to promote good performance, as described below.

Load Balanced Dynamic Clustering

Cloud-based Espresso Logic implementations utilize cloud platform services to scale to as many server instances as required to meet the load. Each server is stateless, and incoming requests are load-balanced over the set of running servers.

Meta Data Caching (logic, security)

The logic and security information you specify in the Logic Designer is read into a cache, so that disk reads are not required to process each request. This cache is retained across transactions until you alter your logic.

Direct Execution (no code generation)

Reactive Logic is many-fold more expressive than procedural code. Compiling logic into JavaScript would therefore represent a significant performance issue, so Reactive Logic is executed directly rather than compiled into JavaScript.

Measurements

Transparent information on system performance is an important requirement. 


Logging

The system provides logs of SQL and Rule execution.

Statistics

You can also obtain aggregate information as provided by the Metrics page.
