Espresso simplifies and accelerates creating RESTful servers by providing declarative definition of many services typically found in an App Server:
- API: connecting to your database creates the Default API: Get, Post, Put and Delete for each Table, including Get / Post access to each View and Stored Procedure. Point-and-click to create nested document resources (end points).
- Integration: resources can combine data from multiple sources (SQL, Mongo and RESTful), including updates between them.
- Security: enforces end-point access, and row/column security.
Specify your settings in the Browser-based Designer. Activation is instantaneous - no code gen, no deployment.
Use the Live Browser, created from the schema, to test your API, and for back-office database maintenance.
Flexible Deployment - both the RESTful service and database can be in the cloud, on-premise, or war file.
Use your own existing database, or the pre-supplied empty database in the Your Database project.
Espresso System Components
The Espresso service is available in several forms:
- Cloud-based service - runs on AWS or Azure. No install.
- Appliance - VMware image, or Docker. Install the VM, and run.
- WAR - for deployment into Web / App Servers, so it can run with and utilize your existing systems, with a familiar deployment model.
As shown below, Espresso is a Web App that consists of these main components:
- The Espresso Server ("backend service") stores these settings in the Admin database, and enforces them in the course of processing REST retrieval / update requests. Typical clients are Web / Mobile Apps, or other systems.
- Admin Database - The Admin MySQL database stores your API definitions, logic, security settings, etc. It is transparent for Cloud/Appliance users, and must be configured for WAR users. The Admin database is accessed via Espresso's REST API.
- Identity Management - In typical deployments, Espresso calls out to your security system (AD, LDAP, OAuth) for Authentication.
In many Web Apps, logic is "buried in the buttons", and therefore cannot be shared by mobile apps, called as a service by other systems / partners, etc.
In Espresso, logic for data access and security / integrity enforcement is automatically partitioned to the Espresso Server, so it can be properly shared. Row-level security is delegated to the DBMS for optimization.
The following screen shot of the Espresso Designer illustrates the basic usage. Start by creating an API (also called a Project).
Declarative API, Integration, Logic and Security
Customize your API, integrate additional Data Sources, and specify your Logic and Security policy, including your own server-side logic. A key element is the JavaScript Object Model, created by Espresso when you connect to your database.
| Type | Description | Applies to |
| --- | --- | --- |
| Resource Row Event | Invoked as each row is returned, e.g., to add attributes or filter the row | Resources, based on the Object Model |
| HTTP Handler | Define new RESTful endpoint Resources with potentially no relationship to the data model | RESTful server |
| Request Event | Invoked for each request, e.g., to log requests or recompute responses | RESTful server |
Transparency is provided through debugging services such as logging.
Multiple Developers can create APIs and logic concurrently. You can import / export your project into a JSON file, which you can manage with existing tools for diffs, source control, etc.
Documentation services include:
- API documentation, via Swagger
- System Documentation, via URLs you can associate with your project and view in the Designer
- Logic Documentation, via Topics that trace requirements to the rules that implement them
Espresso is designed to fit into an Enterprise architecture as shown on the right. Typical integrations are discussed below.
Virtually all languages facilitate HTTP / REST APIs. These integrate naturally with Espresso RESTful APIs.
Some languages are built around objects (Java POJOs, .NET POCOs, etc) - you can create such objects using Swagger SDKs.
Cloud / on-premise SQL databases are accessed via JDBC. Their tables, views and stored procedures are valid end points, per security settings.
Updates are of course subject to database logic such as triggers.
Authentication is provided by default for development, but production systems typically delegate authentication to existing corporate security systems such as LDAP, AD, or OAuth.
Security at the row/column level is injected by Espresso into the SQL sent to the database, where it can be properly optimized.
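The predicate-injection idea can be sketched as follows. This is a minimal illustration of the concept, not Espresso's internal implementation; the function name and filter syntax are assumptions.

```python
# Illustrative sketch: injecting a row-level security filter into outgoing SQL,
# so the DBMS can optimize the combined predicate with its usual planner.
def inject_security(sql, security_filter):
    """Append a role's row-level filter to a query's WHERE clause."""
    if " where " in sql.lower():
        return f"{sql} AND ({security_filter})"
    return f"{sql} WHERE ({security_filter})"

base = "SELECT name, balance FROM customer"
secured = inject_security(base, "region = 'WEST'")
# secured: SELECT name, balance FROM customer WHERE (region = 'WEST')
```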
Since Espresso Server is a standard REST API, you can insert API Management Systems (they operate as Gateways), for monitoring, denial of service attack protection, etc.
MBaaS / PaaS Services
Espresso Logic can be an important component of your Mobile BackEnd as a Service (MBaaS), providing transaction processing automation to complement technologies such as push or security / social integration.
The diagram further illustrates that Espresso Logic is complementary to other automation services, such as Rule Engines for Decisions and Work Flow.
Enterprise Service Bus
Service Orchestration products, also known as Integration Platform as a Service (iPaaS; for example, MuleSoft), can assist in building an Enterprise Service Bus that integrates a number of existing underlying REST services to provide Enterprise Integration. Espresso Logic plays a complementary role by enabling you to build (and integrate) services that don't already exist.
These run alongside existing manually coded database services, as well as non-transactional services that deal with more content-oriented information.
Live API consists of Resource End Points which are defaulted from the schema, along with Custom Resources you define explicitly. You access these End Points via a RESTful API.
If desired, you can expose all of your Base Tables, View Tables and Stored Procedures as Resources (Resource End Points). This enables you to begin browsing your API instantly, and start App development. You can turn this off later in the project to protect access to this data.
More formally, you can use the Logic Designer to define explicit REST Resources for retrieving and updating data. Definition is point and click - select the tables and columns. You can project and alias columns, and retrieve related data with full SQL automation.
Explicitly defined Custom Resources serve several important purposes:
- Database Abstraction Layer: REST Resources are loosely akin to database views, where you can define projections and filters. You can also define aliases for all your tables and columns; this protects your application from schema changes in the database.
- Minimize Network Latency: Beyond view-like functionality, you can define sub-resources, typically for related parent/child data. For example, you might define a Customer that includes its Purchase Orders, Line Items and Product Detail. When you issue REST retrievals, the returned JSON includes all of this data in one request/response.
- Convenient Programming Model: JSON results are returned as a document model with nesting for Sub Resources - this is often preferable for client applications.
- Integration: SubResources can come from different databases, including non SQL sources such as REST, Mongo, and ERP systems.
In any case, Resources are available instantly. There is no restart, code generation, deploy, configuration, etc.
You can access your Resource End Points with a RESTful API. This makes your data available from virtually any client, in particular mobile clients and cloud-based access.
You can issue HTTP-based retrieval requests against these resources. For example, you can retrieve a single object (in this case a Customer with key = Acme), in which case a single JSON object is returned.
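As an illustrative sketch of addressing a Resource End Point by URL (the host, project name, and API version below are hypothetical; the exact URL pattern depends on your installation):

```python
# Hypothetical server, project, and API version -- adjust for your install.
BASE = "https://eval.espressologic.com/rest/demo/v1"

def resource_url(resource, key=None):
    """Build a Resource End Point URL, optionally addressing one object by key."""
    url = f"{BASE}/{resource}"
    return f"{url}/{key}" if key else url

# Retrieve the single Customer whose key is Acme:
single = resource_url("cust", "Acme")
# single: https://eval.espressologic.com/rest/demo/v1/cust/Acme
```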
Your program can also issue filtered and sorted retrieval operations against these Resources. The filter controls which "cust" resource rows are returned in a JSON array, sorted per the order clause. You can omit the filter, in which case all the customers are returned in an array (see pagination, below).
Each customer is returned as a JSON string, including its nested objects (e.g., payments, purchase orders, line items, and product information).
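The filtered, ordered retrieval described above can be simulated as follows. This is an in-memory stand-in for the server, purely illustrative; the data and filter expressions are not Espresso's actual grammar.

```python
import json

# Illustrative stand-in for the server side of a filtered, ordered retrieval.
customers = [
    {"name": "Acme", "balance": 500},
    {"name": "Zenith", "balance": 1500},
    {"name": "Baker", "balance": 2000},
]

def get_cust(filter_fn=None, order_key=None):
    """Return matching rows as a JSON array, as described above."""
    rows = [c for c in customers if filter_fn is None or filter_fn(c)]
    if order_key:
        rows.sort(key=order_key)
    return json.dumps(rows)

# Equivalent of: filter = balance greater than 1000, order = name
result = get_cust(lambda c: c["balance"] > 1000, lambda c: c["name"])
```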
Additional retrieval services are provided as described below.
The following critical services are provided for Enterprise class use.
Coalesced Retrieval Strategy
Retrieval is processed a level at a time, retrieving <pagesize> rows per request. SubResource rows are retrieved in the same request, with optimizations for multi-database resources: all of the SubResource rows are retrieved in 1 query.
For example, imagine we have a pagesize of 10, retrieving customers and their orders. On the first request:
- The first 10 customers are retrieved
- The system extracts the 10 customer keys, and uses these to perform 1 query for orders with a where clause covering all 10 customers ("cust-1 or cust-2, ..."). The actual query also includes, of course, any relevant security filters.
- The system distributes the orders to the proper customer in the preparation of the JSON response.
Observe that this avoids 10 separate order queries, so it performs well in multi-database configurations where a customer-join-order is not feasible.
Each project has a default Chunk Size to control this behavior. Chunk size can be set to one to emit simple SQL for debugging and testing, and can even be changed on a per-request basis.
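The coalesced strategy can be sketched as below: one query for the page of parents, then a single combined query for all their children, with the results distributed back to the proper parent. Function and field names here are illustrative, not Espresso internals.

```python
# Sketch of coalesced retrieval: fetch a page of parents, then ONE child query
# covering every parent key on the page, instead of one query per parent.
PAGE_SIZE = 10

def fetch_page(customers, orders, page_start=0):
    page = [dict(c) for c in customers[page_start:page_start + PAGE_SIZE]]
    keys = [c["id"] for c in page]
    # One orders query naming every customer on the page:
    where = " or ".join(f"cust = '{k}'" for k in keys)
    child_rows = [o for o in orders if o["cust"] in keys]  # stand-in for SQL
    # Distribute the orders to the proper customer for the JSON response:
    for c in page:
        c["orders"] = [o for o in child_rows if o["cust"] == c["id"]]
    return page, where

customers = [{"id": "c1"}, {"id": "c2"}]
orders = [{"cust": "c1", "amt": 5}, {"cust": "c2", "amt": 7}, {"cust": "c1", "amt": 3}]
page, where = fetch_page(customers, orders)
```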
Security specifications are defined for base tables, and are automatically applied to all Resources defined over that table. You can, of course, specify the security properties after the Resource(s) are defined.
Large result sets can cripple performance, both on the client and the server. Espresso Logic therefore supplies a URI which clients can use to retrieve more data.
Pagination is supported at any Sub Resource level. So, a query of Customers and Orders can provide pagination both for many customers, and for many orders within each customer.
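The page-plus-next-URI pattern can be sketched as follows. The URI format shown is hypothetical; it simply illustrates a server handing the client an opaque link for the next chunk of rows.

```python
# Pagination sketch: return one page of rows plus a "next" URI (or None when
# the result set is exhausted). The URI format is hypothetical.
def paginate(rows, offset, pagesize):
    page = rows[offset:offset + pagesize]
    nxt = offset + pagesize
    more = f"/cust?offset={nxt}&pagesize={pagesize}" if nxt < len(rows) else None
    return {"rows": page, "next": more}

first = paginate(list(range(25)), 0, 10)
# first["next"]: /cust?offset=10&pagesize=10
```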
In the context of Espresso Logic, business logic refers to the transactional logic that should be applied when committing a transaction. Business Logic consists of multi-table computations, validations, and actions such as auditing, cloning, and sending mail. This is complementary to other forms of business logic such as process (workflow) logic, application integration, and decision logic.
Logic is declarative, providing active enforcement, re-use over transactions, optimizations, and dependency management / ordering. While simple, logic is remarkably powerful: it is many-fold more concise than procedural code.
You can define business logic using a combination of events and declarative logic, as described below.
The Object Model's row objects provide accessors for attributes and related data, and automated persistence. Old/New versions of rows are supplied to your Events and Logic, with automatic Resource/Object Mapping. More information is provided under Resource Definition.
Events are JavaScript handlers. The Logic Engine invokes these on every update, providing key contextual information. Events are a key architectural element, providing:
- re-use: table events are re-used over all Resources built on that table. When Resource updates are received, they are de-aliased onto the underlying Base Table objects.
- encapsulation: you do not need to explicitly invoke your event logic - it is automatically invoked by the system as update requests are processed.
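The re-use and encapsulation points above can be sketched as a small event registry: a handler bound to a base table fires for updates arriving through any Resource built over that table. The registry and names are illustrative, not Espresso's actual API.

```python
# Sketch of table-event re-use: one handler, registered once on a base table,
# fires for updates arriving via any Resource over that table.
table_events = {}

def on_update(table, handler):
    table_events.setdefault(table, []).append(handler)

audit_log = []
on_update("purchaseorder", lambda old, new: audit_log.append((old, new)))

def process_resource_update(resource_base_table, old_row, new_row):
    # Encapsulation: the system invokes your handlers automatically.
    for handler in table_events.get(resource_base_table, []):
        handler(old_row, new_row)

# Updates through two different Resources over the same table both fire it:
process_resource_update("purchaseorder", {"amt": 10}, {"amt": 20})
process_resource_update("purchaseorder", {"amt": 20}, {"amt": 25})
```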
Reactive Programming Logic
Your logic specifications are defined for base tables, and are automatically applied to all Resources defined over that table. You can of course specify the logic after the Resource(s) are defined.
Logic execution is a sophisticated process that automates complex multi-table dependencies with automatic re-use across Use Cases, while maintaining enterprise class performance through SQL reduction / elimination. Logic plans reflect the latest deployed logic, so that compliance and performance are maintained while providing business agility.
Derivations and Validations
Logic declared in the Logic Designer is bound to your database tables, and enforced on all updates against those tables. Such injection provides automatic re-use.
Logic is specified as a series of Constraint Declarations. Constraints are expressions that the runtime system will guarantee to be true for a transaction to succeed. There are two basic types:
- Derivations: Reactive Programming Expressions define the value of a column in a table. They range from simple formulas (product_price * qty_ordered) to multi-table (sum example). The key idea is that the system will watch for changes in referenced data; if detected, it will recompute the derivation in an optimal manner. This can become quite complex, since derivations can chain as in the example shown here.
- Validations: these are expressions that must be true for a transaction to commit (else an exception is raised). Validations are multi-field, and can operate on derivation results (as in this credit_limit example).
This technology confers significant benefits:
- Agility: Automation for dependency management and SQL access means that the 5 rules above express the equivalent of 500 lines of procedural code.
- Active Enforcement: Logic is not called explicitly. Instead, it is automatically injected into all transactions ("active enforcement"), making re-use automatic and thereby eliminating an entire class of bugs.
- Automatic Ordering: Logic is automatically ordered based on dependencies. This automates an entire class of maintenance problems, since logic can be freely changed (or inserted or deleted). The system will determine the new dependencies and compute a new execution order, automatically. Cycles are of course detected.
- Transparency: Business users can read the logic, understand system behavior, and assist in spotting errors or omissions.
The logic declared above, conceived to address Place Order, is automatically re-used over all related Use Cases. Such active enforcement is automatic (it does not rely on explicit programming calls), and therefore ensures compliance.
Let's consider one example: changing an Item's qtyOrdered. The diagram at right shows how the system recomputes the price, amount, order amount and customer balance.
While a small example, such automated multi-table dependencies, automatically re-used over multiple Use Cases, address the key challenge of transactional business logic.
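The chain can be simulated as below, assuming the rules discussed above (item amount = qty * price, order total = sum of item amounts, customer balance = sum of unpaid orders, balance must not exceed credit_limit). All names are illustrative; in Espresso these rules are declared, not hand-coded.

```python
# Illustrative simulation of the reactive chain: change an item's qtyOrdered
# and the derivations recompute up the chain, ending in a validation.
def recompute(customer, order, item, new_qty):
    old_amount = item["amount"]
    item["amount"] = new_qty * item["price"]               # amount = qty * price
    order["amount_total"] += item["amount"] - old_amount   # order = sum(items)
    customer["balance"] += item["amount"] - old_amount     # balance = sum(orders)
    # Validation: must hold for the transaction to commit.
    if customer["balance"] > customer["credit_limit"]:
        raise ValueError("balance exceeds credit_limit")

cust = {"balance": 100, "credit_limit": 1000}
order = {"amount_total": 100}
item = {"price": 10, "amount": 100}   # qty was 10
recompute(cust, order, item, new_qty=15)
```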
A key benefit of logic is automated multi-table derivations. For example, saving an order might update the customer's balance. It is further possible (but not required) that this related data might be on the user's screen. Good user interface design dictates these effects be shown to the end user.
Espresso Logic returns JSON refresh information for all updated data per logic execution, so that clients can merge these updates into the screen. This can improve performance since the client does not need to re-retrieve data to show derivation results.
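On the client side, merging that refresh information can be sketched as a cache update keyed by primary key; the key name and cache shape here are assumptions.

```python
# Sketch: merge logic-refresh rows returned by the server into a client-side
# cache, so no re-retrieval is needed to show derivation results.
def merge_refresh(cache, refreshed_rows, key="id"):
    for row in refreshed_rows:
        cache[row[key]] = {**cache.get(row[key], {}), **row}

cache = {"c1": {"id": "c1", "name": "Acme", "balance": 100}}
merge_refresh(cache, [{"id": "c1", "balance": 150}])  # server-pushed new balance
```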
Good performance dictates that data not be locked on retrieval. Concurrency is typically addressed by optimistic locking.
Espresso Logic automates optimistic locking for all transactions. This can be based on a configured time-stamp column or, if there is none, a hash of all resource attributes.
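The attribute-hash variant can be sketched as below. Espresso's actual hash scheme is not documented here; this simply illustrates the mechanism of rejecting an update when the stored row no longer matches what the client read.

```python
import hashlib
import json

# Sketch of hash-based optimistic locking: hash the attributes as read, and
# reject the update if the stored row has changed since then.
def row_hash(row):
    return hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()

def update(stored, as_read_hash, changes):
    if row_hash(stored) != as_read_hash:
        raise RuntimeError("optimistic lock failure: row changed since read")
    stored.update(changes)

row = {"id": 1, "name": "Acme"}
token = row_hash(row)                      # captured at retrieval time
update(row, token, {"name": "Acme Corp"})  # succeeds: row unchanged since read
```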
Transaction bracketing is automatic. PUT/POST/DELETE requests (which may be comprised of multiple rows) are automatically bundled into a transaction, including all logic-triggered updates.
Generated Key Handling
Typical application logic often includes significant code to handle DBMS-generated keys. For example, you want to add an Order and its Line Items in 1 transaction. The Order# is generated by the database; how is this key placed into each Line Item?
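The cascade can be sketched as follows, with in-memory lists standing in for database tables and a counter standing in for the DBMS key generator; all names are illustrative.

```python
# Sketch of generated-key handling: the parent's generated key is captured and
# stamped into each child row within the same transaction.
next_order_id = 100   # stand-in for the DBMS key generator

def insert_order_with_items(orders_table, items_table, items):
    global next_order_id
    order_id = next_order_id          # "DBMS" generates the Order#
    next_order_id += 1
    orders_table.append({"id": order_id})
    for item in items:                # cascade the key to each Line Item
        items_table.append({**item, "order_id": order_id})
    return order_id

orders, items = [], []
oid = insert_order_with_items(orders, items, [{"qty": 1}, {"qty": 2}])
```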
The Live Browser provides the following key services:
- Multi-table application interfaces derived from your parent/child relationships, including
- Master/Detail, to show child objects for a parent (e.g., products for category)
- Drill Down Navigation, to show related data (e.g., Orders for Product, or SalesRep for Order)
- Automatic Joins (e.g., show Product Name, not Product Number)
- Filtering on multiple fields, with paginated scrolling (subject to row/column security)
- Row Sharing, so you can send the current Form Row to colleagues
- Update Services, including
- Updatable grids
- Lookups, to associate a child to a parent (e.g., Company Name for Product)
- Enforcement of Live Logic
- Authoring, so you can control which attributes are displayed, grouped and formatted, and skinning
The Live Browser uses the same REST API as any other Espresso Logic application: everything it does can, by definition, be done by using the REST API.
Per REST requirements and industry Best Practice, all processing is stateless. Espresso thus naturally scales horizontally.
Enterprise class performance is addressed by a number of services described here, spanning reduced network latency, RESTful server operation, and DBMS optimization.