Appeon 6.6
Appeon Performance Tuning Guide
Appeon Performance
Expected performance level
Automatic performance boosting
Impact of the Internet and slow networks
Impact of “heavy” client-side logic
Impact of large data transmission
Performance-Related Settings
Appeon Developer performance settings
Appeon Enterprise Manager performance settings
Timeout settings
DataWindow data caching
Multi-thread download settings
Custom Libraries download settings
Log file settings
Internet Explorer performance settings
Web and application server performance settings
SAP Sybase EAServer
JVM startup option
Configuring data sources
HTTP properties
Microsoft IIS server
Recommendations for avoiding common errors on IIS
Advanced thread settings
Database performance settings
Recommended database driver
Recommended database setting
Identifying Performance Bottlenecks
Heavy window report
Appeon Performance Analyzer
Getting Started
Enabling Appeon Performance Analyzer
Starting Appeon Performance Analyzer
Getting to know Appeon Performance Analyzer
Removing Appeon Performance Analyzer
Working with Appeon Performance Analyzer
System Configuration
Calls Analysis
Download Analysis
View Detail
Additional Functions
Testing Appeon Web applications with LoadRunner
General Limitations on Performance Testing
Testing Environment
Testing Steps
Configuring AEM
Data Preparation (for update only)
Preparing Test Cases
Recording Scripts
Modifying Scripts
Additional steps for Update operation
Parameterization of SQL statements
Playing back Script to test the correctness of scripts
Setting Scenarios
Additional steps for Update operation
Running Scenarios
Modifying the scripts of NVO
Modifying the scripts of EJB/JavaBean
Errors appear when playing back scripts with LoadRunner 8.0
The value of sessionID is null
Error message appears in script playback
Error message in Appeon Log
Failed to parameterize scripts
Out of memory error and application server shut down
Field values do not change after parameterization and playback
Runtime errors causing scenario failure
Transactions failed
Unable to connect to remote servers
Analyzing log files
Analyzing Windows application log files
Analyzing Appeon Server log files
Analyzing active transaction log
Identifying Performance Bottlenecks of Web Server and Application Server
Identifying Performance Bottlenecks of DB Server
Deadlock analysis
Identifying Performance Bottlenecks of PB application
Analyzing performance bottlenecks of PB application
Tuning: DB Server
Tuning: Excessive Server Calls
Technique #1: partitioning transactions via stored procedures
Technique #2: partitioning non-visual logic via NVOs
Technique #3: eliminating recursive Embedded SQL
Technique #4: grouping multiple server calls with Appeon Labels
Tuning: Heavy Client
Technique #1: thin-out “heavy” Windows
Technique #2: thin-out “heavy” UI logic
Manipulating the UI in loops
Triggering events repeatedly
Performing single repetitive tasks
Initializing “heavy” tabs
Using ShareData or RowsCopy/RowsMove for data synchronization
Using computed fields
Using DataWindow expressions
Using complex filters
Using RowsFocusChanging/RowsFocusChanged events
Technique #3: offload “heavy” non-visual logic
Tuning: Large Data Transmissions
Technique #1: retrieving data incrementally
For Oracle database server
For all other database servers
Technique #2: minimizing excessive number of columns

Technique #1: partitioning transactions via stored procedures

Imagine your PowerBuilder client contains the following code:

long ll_rows, i
decimal ldec_price, ldec_qty, ldec_amount

ll_rows = dw_1.retrieve(arg_orderid)
for i = 1 to ll_rows
    dw_1.SetItem(i, "price", dw_1.GetItemDecimal(i, "price")*1.2)
next

if dw_1.update() < 0 then
    rollback;
    return
end if

for i = 1 to ll_rows
    ldec_price = dw_1.GetItemDecimal(i, "price")
    ldec_qty = dw_1.GetItemDecimal(i, "qty")
    if ldec_price >= 100 then
        ldec_amount = ldec_amount + ldec_price*ldec_qty
    end if
next

ll_rows = dw_2.Retrieve(arg_orderid)
dw_2.SetItem(dw_2.GetRow(), "amount", ldec_amount)

if dw_2.update() = 1 then
    commit;
else
    rollback;
end if

This code is problematic not only from a runtime performance perspective, since it makes numerous server calls over the WAN, but also because it can produce a "long transaction" that ties up the database, resulting in poor database scalability.

The business logic and the data access logic (for saving data) are intermingled. When the first "Update( )" is submitted to the database, the related table in the database is locked until the entire transaction is ended by the "Commit( )". The longer the transaction, the longer other clients must wait, resulting in fewer transactions per unit of time.
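To make the locking problem concrete, the following is a minimal sketch in Python using SQLite as a stand-in database. The table and column names (order_detail, orders, orderid, price, qty, amount) and the price*1.2 / amount calculation mirror the PowerBuilder example above; the function name save_order_partitioned is hypothetical. It shows the partitioned shape: retrieve the data, run the calculation on a cached copy while no write locks are held, then save everything in one short transaction.

```python
import sqlite3

def save_order_partitioned(conn, order_id):
    """Retrieve, calculate outside the transaction, then save in one short transaction."""
    # "Retrieve Data": cache the rows client-side (the role a DataStore plays in PowerBuilder)
    rows = conn.execute(
        "SELECT price, qty FROM order_detail WHERE orderid = ?", (order_id,)
    ).fetchall()

    # "Calculate": the time-consuming logic runs on the cached copy; no locks are held here
    amount = sum(price * 1.2 * qty for price, qty in rows if price * 1.2 >= 100)

    # "Save Data": one short transaction groups both UPDATEs
    # (in the Appeon technique, this is the part moved into a stored procedure)
    with conn:
        conn.execute(
            "UPDATE order_detail SET price = price * 1.2 WHERE orderid = ?", (order_id,)
        )
        conn.execute(
            "UPDATE orders SET amount = ? WHERE orderid = ?", (amount, order_id)
        )
    return amount

# Demo with an in-memory database mirroring the example's tables
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_detail (orderid INTEGER, price REAL, qty REAL)")
conn.execute("CREATE TABLE orders (orderid INTEGER, amount REAL)")
conn.executemany("INSERT INTO order_detail VALUES (?, ?, ?)",
                 [(1, 100.0, 2.0), (1, 50.0, 3.0)])
conn.execute("INSERT INTO orders VALUES (1, 0.0)")
conn.commit()

# 100*1.2 = 120 qualifies (>= 100); 50*1.2 = 60 does not
total = save_order_partitioned(conn, 1)
```

The key design point is that the write transaction opens only after the calculation is finished, so the tables are locked for the duration of two UPDATEs rather than for the whole retrieve-calculate-save sequence.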

To improve the performance and scalability of the application, the above code can be partitioned in two steps:

  1. First, move the business logic (or as much of it as possible) outside of the transaction. In other words, the business logic should appear either before all Updates of the transaction or after the Commit of the transaction. This way the transaction is not tied up while the business logic is executing.

  2. Second, partition the transaction by moving all the Updates into a stored procedure. The stored procedure executes on the database side and returns only the final result. This reduces the multiple server calls from the individual updates to a single server call over the WAN that saves all the data in one shot.

It is generally best to divide the original transaction into three segments or procedures: "Retrieve Data", "Calculate" (the time-consuming logic), and "Save Data". The "Retrieve Data" segment retrieves all the data required for the calculation; this data is usually cached in one or more DataWindows or DataStores. The "Calculate" segment performs the calculation on the cached data instead of retrieving it directly from the database. The result is cached back to a DataStore and then saved to the database by the "Save Data" segment.

Example of the new PB client code partitioned into three segments and invoking a stored procedure to perform the Updates:

long ll_rows, i
decimal ldec_price, ldec_qty, ldec_amount

//Retrieve data
ll_rows = dw_1.retrieve(arg_orderid)

//Calculate (time-consuming logic)
for i = 1 to ll_rows
    dw_1.SetItem(i, "price", dw_1.GetItemDecimal(i, "price")*1.2)
next

for i = 1 to ll_rows
    ldec_price = dw_1.GetItemDecimal(i, "price")
    ldec_qty = dw_1.GetItemDecimal(i, "qty")
    if ldec_price >= 100 then
        ldec_amount = ldec_amount + ldec_price*ldec_qty
    end if
next

dw_2.SetItem(dw_2.GetRow(), "amount", ldec_amount)

//Save data
declare UpdateOrder procedure for up_UpdateOrder @OrderID = :arg_orderid,
@amount = :ldec_amount;
execute UpdateOrder;

Example of code for the stored procedure to Update the database:

create procedure up_UpdateOrder(
@orderid integer,
@amount decimal(18, 2))
as
update order_detail set price = price*1.2
where orderid = @orderid

if @@error <> 0
        return dba.uf_raiseerror()

update orders set amount = @amount
where orderid = @orderid

if @@error <> 0
        return dba.uf_raiseerror()


In summary, this optimization technique improves performance and scalability because the transaction is shorter. The server call-inducing Updates are all implemented on the server side rather than the client side, improving response time, and moving the business logic out of the transaction shortens the transaction further. If the business logic cannot be moved out of the transaction, consider implementing the business logic together with the transaction as a stored procedure. In short, shorter transactions mean better scalability and faster performance.