Thursday 28 November 2019

Db2 and Db2 on Cloud - Technical Advocate Newsletter - December 2019

Db2 & Db2 on Cloud
Technical Advocate Newsletter
December 2019


    Announcement - IBM Db2 Version 11.1 Mod Pack 4 Fix Pack 5 is Now Available!

    Fix Pack Summary of Changes: https://ibm.co/2tSYbby
    Db2 11.1.4.5 Download Page: https://ibm.co/2DrBFgM
    Db2 11.1.4.5 High Impact and Pervasive (HIPER) APAR fixes: https://ibm.co/2L0Xfg4
    Db2 11.1.4.5 Full APAR Fix List: https://ibm.co/2L0Xfg4
    Db2 11.1.4.5 Client Drivers Download Page: https://ibm.co/2QZayS8


    Blog - Simplifying and Accelerating New Database Applications with Db2 11.5 and External Tables

    External Table capabilities simplify existing applications and provide new ways for applications to access externally stored data, without the overhead of loading all data into the database.
    https://ibm.co/2qRV6fS
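    As a rough illustration (hypothetical table, column, and file names; syntax per the Db2 11.5 CREATE EXTERNAL TABLE statement), an application can query a flat file in place rather than loading it first:

```shell
# Hypothetical example: define an external table over a CSV file and query
# it in place. The db2 CLP step only runs where a Db2 client is installed.
cat > /tmp/ext_demo.sql <<'EOF'
CREATE EXTERNAL TABLE ext_sales (
    sale_id   INTEGER,
    amount    DECIMAL(10,2)
)
USING (DATAOBJECT '/tmp/sales.csv' DELIMITER ',');

SELECT COUNT(*) FROM ext_sales;
EOF
command -v db2 >/dev/null && db2 -tvf /tmp/ext_demo.sql || true
```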


    Webinar - Introducing the NEW Db2 Data Management Console

    Join Peter Kohlmann (IBM Product Manager), as he introduces the Db2 Data Management Console that is built for your enterprise and your on-premises Db2 databases. Your whole team can work together to manage, monitor, and receive alerts across hundreds of Db2 databases from a single screen. This is the next step in the journey we started this year with Db2 Big SQL, Db2 Warehouse on Cloud and Cloud Pak for Data.
    December 19, 2019, 12pm EST: https://register.gotowebinar.com/register/3263147504100628236?source=dvshome
    (Replay will be made available afterward).



    Support - Overview of IBM Support Framework, Guidelines, and Escalation methods

    Please review the IBM Support guidelines for a description of the IBM support framework, how to get set up for IBM support, methods to help yourself via Watson search, and how to escalate a support case.
    https://www.ibm.com/support/pages/ibm-support-guide


    Webinar - Transform Your Data With MultiCloud and AI

    Join IBM and industry analysts for a panel discussion on how to transform your data with AI and multicloud deployment options. AI not only gives you a smarter database, it optimizes how people work within your organization, automates decision-making, and enables new business models. And all of this can be accomplished on your cloud of choice. In this webinar, you will learn: how the changing cloud landscape impacts data management; how multicloud improves data accessibility and flexibility; and how to modernize data in the cloud with an AI-powered database.
    Register to watch the replay: http://bit.ly/2L2s6Jw


    Conferences & Summits & User Groups

    Over the years, IDUG has become known for hosting dynamic conferences packed with the cutting-edge content you need. This year will be no exception. Be sure to make your plans to attend. With great sessions, keynote speakers, workshops, and more, you won't want to miss this event.
    IDUG Conference North America, Dallas TX, Jun 7-11, 2020: https://www.idug.org/p/cm/ld/fid=2059
    IDUG Conference Europe, Edinburgh, Scotland, Oct 25-29, 2020: https://www.idug.org/p/cm/ld/fid=2149

    Upcoming Regional User Groups:

    Architecture Overview - High-Level Overview of Db2 Architecture and New Features in Version 11.5

    Keri Romanufa, IBM Db2 Chief Architect, provides an architectural overview of Db2, both row and column oriented, and describes new features, functions, and best practices in Version 11.5.
    https://www.youtube.com/watch?v=Lkr0Er_IhV4&feature=youtu.be


    Webinar - Db2, The AI Database, and managing data efficiently in a hybrid cloud world

    IBM Executives Matthias Funke and Thomas Chu will show you how to modernize data architectures, avoid lock-in and drive economic value across a hybrid cloud data architecture with the AI database that both leverages and supports AI.
    January 9, 2020, 12pm EST: 
    https://register.gotowebinar.com/register/6989060358631997196?source=dvshome
    (Replay will be made available afterward).


    Reminder - End of Support (EOS) For Db2 Version 10.5 is April 30, 2020
    Effective April 30, 2020, IBM will withdraw support for Db2 Version 10.5. Extended support will be available beyond this date.
    https://ibm.co/2NCGTNi


    Webinar - Theory to Practice: HADR in the Real World

    Have you ever seen the "poison pill" message in your Db2 diagnostic log just before Db2 crashes to prevent split brain? Have you ever needed to know before implementing HADR or changing the SYNCMODE what the performance impact might be on your primary database? Join Ember Crooks (IBM Champion and IBM Gold Consultant) to learn the planning and decisions behind an HADR implementation and the tools to make your work with HADR successful. Hear stories from HADR challenges and problems in the real world.
    January 23, 2020, 12pm EST:
    https://register.gotowebinar.com/register/7532286172961912589?source=dvshome
    (Replay will be made available afterward).


    Vlog - Expensive Oracle Renewal Looming? Move to DB2!

    The Fillmore Group is inviting organizations frustrated with rising Oracle costs to join us for a webinar focusing on the two primary motivations behind DB2 ...
    http://www.channeldb2.com/video/oracle-renewal-looming-consider-db2-2017-04-06



    _____________________________________________________________________________________________
    Roadmaps - Db2 Development Roadmaps are Accessible to Everyone

    Curious about the feature and function committed to upcoming versions of Db2? You can now explore live development roadmaps for Db2 and the IBM Analytics family of products. This content is live and subject to change.
    https://ibm-analytics-roadmaps.mybluemix.net/

    Db2 Community - Follow the Db2 Developer and Administrator Community
    Share. Solve. Do More.
    https://developer.ibm.com/data/db2/

    Follow-us! - Stay up to date on the latest news from the Db2 team
    IBM Support Community: http://ow.ly/rPsM30fHnwI
    IBM Db2 Twitter: https://twitter.com/IBM_Db2luw
    developerWorks Answers (forum): https://developer.ibm.com/answers/topics/
    Thoughts from Db2 Support (blog): https://www.ibm.com/developerworks/community/blogs/IMSupport?lang=en
    Db2 Technical Advocacy Wiki: https://www.ibm.com/developerworks/community/groups/community/Db2_Technical_Advocacy_Program

Friday 4 October 2019

Db2 Persistent Diagnostic Data - Use Case: Isolating high CPU operations

- David Sciaraffa, Software Engineering Manager – IBM Db2


The Db2 Persistent Diagnostic Data scripts (available here) collect various Db2 and Operating System diagnostic information, and retain this info for a period of time, allowing for basic triage of many types of issues. Information is collected about every minute (by a script I often call the 'minutely script'), and additional info about every hour (by a script I often call the 'hourly script').

The diagnostic information is sometimes raw in nature, and thus problem triage often requires some scraping and correlation of the data.
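The scripts themselves are the reference implementation; as a rough sketch of the pattern they follow (hypothetical paths, file names, and retention period -- the real scripts collect much more), a 'minutely' collector boils down to:

```shell
#!/bin/sh
# Sketch of a minutely collector: stamp each output file with hostname and
# timestamp, and prune old files so data is retained only for a limited window.
OUTDIR=${OUTDIR:-/tmp/db2_persist_diag}
RETAIN_DAYS=${RETAIN_DAYS:-3}
HOST=$(hostname)
STAMP=$(date +%Y%m%d.%H%M%S)
mkdir -p "$OUTDIR"
# OS-level data (each collector is skipped if the tool is not installed)
command -v vmstat >/dev/null && vmstat 1 2 > "$OUTDIR/OS_vmstat.$HOST.$STAMP.txt" 2>&1
command -v top    >/dev/null && top -b -n 1 > "$OUTDIR/OS_top.$HOST.$STAMP.txt" 2>&1
# Db2-level data (db2pd exists only on the database server)
command -v db2pd  >/dev/null && db2pd -edus > "$OUTDIR/db2pd_edus.inst.$HOST.$STAMP.txt" 2>&1
# prune collections older than the retention window
find "$OUTDIR" -type f -name '*.txt' -mtime +"$RETAIN_DAYS" -delete
```

Scheduled from cron once per minute (and a heavier variant once per hour), this yields the timestamped files examined below.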


In a recent customer engagement, CPU utilization on the database server host spiked for a moderate period of time. Below, we narrow down the cause of the CPU spike using the information collected by the Db2 Persistent Diagnostic Data scripts.


Examining the vmstat data (collected by the minutely scripts), we can see the user-cpu spike starting at approximately 18:15, with user-cpu usage jumping from about 25% to over 80%:

$ grep "" OS_vmstat*
OS_vmstat.[hostname].20190917.181501.txt:procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
OS_vmstat.[hostname].20190917.181501.txt: r b swpd free inact active si so bi bo in cs us sy id wa st
OS_vmstat.[hostname].20190917.181501.txt:27 0 0 484832 41316092 22476632 0 0 144 48 1 1 4 3 92 0 0
OS_vmstat.[hostname].20190917.181501.txt:22 1 0 477076 41319924 22478360 0 0 1048 948 38641 59758 80 9 10 0 0
OS_vmstat.[hostname].20190917.181501.txt:22 0 0 510836 41284108 22482420 0 0 968 1072 36293 51284 82 7 11 0 0
OS_vmstat.[hostname].20190917.181501.txt:25 0 0 491156 41298248 22482316 0 0 1668 1196 36069 45501 83 8 8 0 0
OS_vmstat.[hostname].20190917.181501.txt:23 0 0 616992 41242172 22410940 0 0 1928 1468 35699 37067 84 9 7 0 0
OS_vmstat.[hostname].20190917.181501.txt:23 0 0 598428 41254200 22412968 0 0 2196 14028 32377 39835 84 8 8 0 0
OS_vmstat.[hostname].20190917.181501.txt:20 0 0 686944 41206804 22371564 0 0 1918 1572 35247 42502 80 9 11 1 0
OS_vmstat.[hostname].20190917.181501.txt:28 0 0 684656 41212132 22367500 0 0 1952 1664 32867 42681 78 9 13 1 0
OS_vmstat.[hostname].20190917.181501.txt:31 0 0 680500 41216784 22366940 0 0 2648 2688 38645 45578 81 9 9 1 0
OS_vmstat.[hostname].20190917.181501.txt:27 0 0 672308 41223868 22366852 0 0 9688 3940 39261 48969 83 9 7 1 0
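When skimming days of this data, a hypothetical awk filter can surface only the samples above a user-cpu threshold (the sample file below is a synthetic excerpt of a vmstat collection):

```shell
# synthetic excerpt of a vmstat collection file (header + two samples)
cat > /tmp/vmstat_sample.txt <<'EOF'
 r b swpd free inact active si so bi bo in cs us sy id wa st
27 0 0 484832 41316092 22476632 0 0 144 48 1 1 4 3 92 0 0
22 1 0 477076 41319924 22478360 0 0 1048 948 38641 59758 80 9 10 0 0
EOF
# in vmstat's default layout, column 13 ('us') is user-cpu; keep samples >= 70%
awk '$1 ~ /^[0-9]+$/ && $13+0 >= 70 { print "high user-cpu sample:", $13 "%" }' /tmp/vmstat_sample.txt
```

The same filter can be pointed at the real OS_vmstat files to pick out the spike windows worth investigating.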

Next, I confirmed that the cpu spike was associated with the db2sysc database server process by comparing top data from the two time periods:
$ less OS_top.[hostname].20190917.180001.txt
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8621 ipportd1  19  -1 38.4g  13g 7.6g S 118.8 21.0  19294:30 db2sysc
13443 root      RT   0  637m  65m  48m R 22.3  0.1   2786:46 corosync
12122 root      15  -5  422m  46m 1136 S 14.8  0.1   2067:30 tesvc
12564 root      20   0  328m 290m 2252 S 14.8  0.5   1912:53 rtvscand
 5962 root      20   0  6184 1020  336 S 13.0  0.0   1551:54 symcfgd
38663 ipportd1  20   0 15684 1912  884 R  7.4  0.0   0:00.06 top
38766 caapm     20   0  122m 3508 1932 S  5.6  0.0   0:00.03 perl
11857 root      20   0 20.1g 760m  10m S  3.7  1.2 738:29.34 java
$ less OS_top.[hostname].20190917.181502.txt
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8621 ipportd1  19  -1 38.3g  15g 9.8g S 1853.9 24.3  19363:32 db2sysc
38942 ipportd1  20   0  339m  51m  13m R 32.4  0.1   0:00.26 db2bp
39244 ipportd1  20   0  250m  15m  10m R 25.2  0.0   0:00.14 db2
13443 root      RT   0  638m  66m  49m S 18.0  0.1   2788:40 corosync
39285 root      19  -1 36580 1748  904 R 16.2  0.0   0:00.09 clulog
12122 root      15  -5  422m  46m 1136 S 12.6  0.1   2068:48 tesvc
12564 root      20   0  328m 290m 2252 S 10.8  0.5   1914:09 rtvscand
39190 ipportd1  20   0 15684 1924  884 R 10.8  0.0   0:00.10 top
 5962 root      20   0  6184 1020  336 S  9.0  0.0   1552:56 symcfgd
  358 root      20   0     0    0    0 S  3.6  0.0   6:12.15 kswapd1
11857 root      20   0 20.1g 760m  10m S  3.6  1.2 738:53.95 java
12135 ipporta1  19  -1 52.0g  24g  21g S  3.6 39.8   1227:27 db2sysc
  228 root      20   0     0    0    0 S  1.8  0.0   2:36.82 kblockd/2
  357 root      20   0     0    0    0 S  1.8  0.0   6:29.98 kswapd0
12223 caapm     20   0  187m  54m 1596 S  1.8  0.1 169:17.21 sysedge
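Pulling the db2sysc %CPU out of two batch-mode top snapshots can be done mechanically too; a hypothetical helper (the sample files are synthetic, trimmed versions of the real collections):

```shell
# synthetic before/after excerpts of the top collections
cat > /tmp/top_before.txt <<'EOF'
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8621 ipportd1  19  -1 38.4g  13g 7.6g S 118.8 21.0  19294:30 db2sysc
EOF
cat > /tmp/top_after.txt <<'EOF'
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8621 ipportd1  19  -1 38.3g  15g 9.8g S 1853.9 24.3  19363:32 db2sysc
EOF
# %CPU is column 9 of top's default batch-mode output
for f in /tmp/top_before.txt /tmp/top_after.txt; do
    awk '$NF == "db2sysc" { print FILENAME ": db2sysc %CPU=" $9 }' "$f"
done
```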



Next, I examined the cumulative user-cpu values of all the Db2 threads in the db2pd_edus output, comparing a time frame just before the cpu spike (18:00) with one during the spike (18:15).
However, I did not find any EDU with a very sharp increase in cumulative user-cpu during this time; mostly just small increases between the two time frames:
$ diff db2pd_edus.inst.[hostname].20190917.180002.txt db2pd_edus.inst.[hostname].20190917.181504.txt  | less
  EDU ID    TID                  Kernel TID           EDU Name                               USR (s)         SYS (s)
  
  2770      140698158360320      30841                db2agent (WPJCR) 0                   525.340000    24.820000
  2769      140698162554624      29837                db2agent (SGROUPDB) 0                 10.320000     2.510000
  2768      140698166748928      21556                db2agent (WPJCR) 0                   360.730000    20.310000
  2767      140698170943232      21553                db2agntdp (WPREL   ) 0                 9.300000     2.270000
---
  2770      140698158360320      30841                db2agent (WPJCR) 0                   531.230000    25.330000
  2769      140698162554624      29837                db2agent (SGROUPDB) 0                 10.840000     2.630000
  2768      140698166748928      21556                db2agent (WPJCR) 0                   398.970000    21.970000
  2767      140698170943232      21553                db2agent (WPJCR) 0                    10.080000     2.320000
...etc...
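Eyeballing a diff across thousands of EDUs doesn't scale; a hypothetical awk join can compute the per-EDU delta of the cumulative 'USR (s)' column between the two snapshots and rank the biggest movers (the files below are synthetic excerpts, with the EDU Name column collapsed to one token and headers pre-trimmed):

```shell
# synthetic excerpts of two db2pd -edus snapshots
cat > /tmp/edus_before.txt <<'EOF'
2770 140698158360320 30841 db2agent(WPJCR) 525.340000 24.820000
2769 140698162554624 29837 db2agent(SGROUPDB) 10.320000 2.510000
2768 140698166748928 21556 db2agent(WPJCR) 360.730000 20.310000
EOF
cat > /tmp/edus_after.txt <<'EOF'
2770 140698158360320 30841 db2agent(WPJCR) 531.230000 25.330000
2769 140698162554624 29837 db2agent(SGROUPDB) 10.840000 2.630000
2768 140698166748928 21556 db2agent(WPJCR) 398.970000 21.970000
EOF
# field 1 = EDU ID, field 5 = cumulative USR seconds (layout assumed)
awk 'NR==FNR { usr[$1]=$5; next }
     $1 in usr { printf "EDU %s delta_usr=%.2f\n", $1, $5-usr[$1] }' \
    /tmp/edus_before.txt /tmp/edus_after.txt | sort -t= -k2 -n -r
```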
Next, I examined the db2pd_utilities output and confirmed that no utilities (such as REORGs or BACKUPs) were running during this time frame.
$ ls -l db2pd_utilities*

-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:00 db2pd_utilities.[hostname].20190917.180004.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:03 db2pd_utilities.[hostname].20190917.180305.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:06 db2pd_utilities.[hostname].20190917.180605.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:09 db2pd_utilities.[hostname].20190917.180904.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:12 db2pd_utilities.[hostname].20190917.181204.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:15 db2pd_utilities.[hostname].20190917.181507.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:18 db2pd_utilities.[hostname].20190917.181812.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:21 db2pd_utilities.[hostname].20190917.182113.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:24 db2pd_utilities.[hostname].20190917.182414.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:27 db2pd_utilities.[hostname].20190917.182707.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:30 db2pd_utilities.[hostname].20190917.183006.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:33 db2pd_utilities.[hostname].20190917.183306.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:36 db2pd_utilities.[hostname].20190917.183605.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:39 db2pd_utilities.[hostname].20190917.183904.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:42 db2pd_utilities.[hostname].20190917.184205.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:45 db2pd_utilities.[hostname].20190917.184505.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:48 db2pd_utilities.[hostname].20190917.184804.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:51 db2pd_utilities.[hostname].20190917.185105.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:54 db2pd_utilities.[hostname].20190917.185405.txt
-rwxrwxr-x    1 ecuunpck swsupt          410 Sep 18 00:57 db2pd_utilities.[hostname].20190917.185705.txt

Next, I compared the db2pd_agents output between the same two time frames. I can see a small increase in the number of active agents (i.e., an increase in the number of database connections):
$ diff  db2pd_agents.inst.[hostname1].20190917.180003.txt db2pd_agents.inst.[hostname1].20190917.181807.txt | less
< Active coord agents: 797
< Active agents total: 797
< Pooled coord agents: 78
< Pooled agents total: 78
---
> Active coord agents: 828
> Active agents total: 828
> Pooled coord agents: 47
> Pooled agents total: 47
I also see some cases where the cumulative rows-read values increase sharply, such as this example, where AppHandl 3698 read about 31 million rows between the two time frames:

$ diff  db2pd_agents.inst.[hostname1].20190917.180003.txt db2pd_agents.inst.[hostname1].20190917.181807.txt | less
Address              AppHandl [nod-index] AgentEDUID Priority ... Rowsread  Rowswrtn ...
0x00007FFCEC3F6900   3695     [000-03695] 1123       0 Coord  ... 11560     0        ...
0x00007FFCEBBEB4C0   3696     [000-03696] 1086       0 Coord  ... 54716     0        ...
0x00007FFCEB8985C0   3698     [000-03698] 1081       0 Coord  ... 200138526 312789   ...
---
Address              AppHandl [nod-index] AgentEDUID Priority ... Rowsread  Rowswrtn ...
0x00007FFCEC3F6900   3695     [000-03695] 1123       0 Coord  ... 0         0        ...
0x00007FFCEBBEB4C0   3696     [000-03696] 1086       0 Coord  ... 0         0        ...
0x00007FFCEB8985C0   3698     [000-03698] 1081       0 Coord  ... 231755161 334148   ...
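The before/after rows-read delta per application handle can also be computed rather than eyeballed; a hypothetical sketch with synthetic excerpts (in the real files, Rowsread sits among many more columns):

```shell
# synthetic excerpts: AppHandl, Rowsread, Rowswrtn
cat > /tmp/agents_before.txt <<'EOF'
3695 11560 0
3696 54716 0
3698 200138526 312789
EOF
cat > /tmp/agents_after.txt <<'EOF'
3695 0 0
3696 0 0
3698 231755161 334148
EOF
# field 1 = AppHandl, field 2 = cumulative Rowsread (synthetic layout)
awk 'NR==FNR { rr[$1]=$2; next }
     $1 in rr && $2 > rr[$1] { print "AppHandl " $1 " read " $2-rr[$1] " rows" }' \
    /tmp/agents_before.txt /tmp/agents_after.txt
```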

Next, using the AgentEDUID value of 1081 for this agent, I examined the db2pd_apinfo data to see what this agent was executing:
$ less db2pd_apinfo.[hostname1].20190917.181507.txt 

Application :  
Address :                0x00007FFCEBD00080  
AppHandl [nod-index] :   3698     [000-03698]  
TranHdl :                25  
Application PID :        0  
Application Node Name :  [ipaddr]
IP Address:              [ipaddr]
Connection Start Time :  (1567261562)Sat Aug 31 10:26:02 2019  
Client User ID :         n/a  
System Auth ID :         APPWPS  
Coordinator EDU ID :     1081 ...  
Last executed statements :    
Package cache ID :        0x0000021900000002    
Anchor ID :               537    
Statement UID :           2    
SQL Type :               Dynamic    
Statement Type :          DML, Insert/Update/Delete    
Statement :               DELETE FROM JCR.ICMSTJCRREMOVEHLP WHERE WSID = ? AND LID = ?
So we might consider this a suspect query... but let's keep looking....

Next, I loaded the MON_GET_ACTIVITIES output from the 18:15 cpu-spike time frame into a spreadsheet. However, it did not reflect the activities of the 800+ agents in the database; it contained only 4 records, none of which show high total_cpu_time values, large rows-read or rows-written values, or large query cost estimates.


So I suspect whatever is causing the CPU increase is not a single long-running large query captured in the minutely data collections, but rather many successive executions of a single query or small set of queries, each of which may not have been executing at the moment the minutely data was collected.

Next, I examined the mon_get_pkg_cache_stmt output from the hourly data collection script.
I crafted the following awk command to calculate the average cpu time per query execution from the mon_get_pkg_cache_stmt() output.
I do see some relatively expensive individual queries whose cost is compounded by many executions:

$ awk -F' ' '{avg=0; if($9+0 != 0 && $13+0 != 0){ avg=$13/$9 }; print "pkg_sch:" $3 ",pkg_nam:" $4 ",pkg_ver:" $5 ",sec_num:" $6 ",num_execs:" $9 ",total_cpu_time:" $13 ",average_cpu_time:" avg;}' qry3.out | less
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:155337498,total_cpu_time:78672618581,average_cpu_time:506.463
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:2447636,total_cpu_time:25020306619,average_cpu_time:10222.2
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:2853649,total_cpu_time:14116987481,average_cpu_time:4947
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:25452,total_cpu_time:7238807540,average_cpu_time:284410
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:109301657,total_cpu_time:4889890787,average_cpu_time:44.7376
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:41021,total_cpu_time:4850725299,average_cpu_time:118250
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:38909250,total_cpu_time:3761535928,average_cpu_time:96.6746
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:58972,total_cpu_time:3707165379,average_cpu_time:62863.1
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:11536,total_cpu_time:3488732997,average_cpu_time:302421
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:17642026,total_cpu_time:3051013622,average_cpu_time:172.94
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:17226,total_cpu_time:2038784887,average_cpu_time:118355
...
pkg_sch:-,pkg_nam:-,pkg_ver:-,sec_num:-,num_execs:73,total_cpu_time:120223152,average_cpu_time:1.64689e+06
...

Finding the entry with this total_cpu_time value within the mon_get_pkg_cache_stmt data files, we see this particular query is:

DELETE FROM jcr.WCM_USER_SHORTCUT t0 WHERE (t0.VPID = ? AND t0.LOCATION_DATA LIKE ? ESCAPE '\') AND t0.KIND = ?

At this point, I'd consider these two DELETE statements to be suspects, and recommend using the db2batch benchmarking tool against these statements to determine their execution metrics (rows read, updated, written, etc.) and whether any tuning is required.
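For example, a db2batch run might be set up like this (sketch only: the literal predicate values are hypothetical stand-ins for the '?' parameter markers, since db2batch executes static SQL, and the run should target a test copy of the data):

```shell
# write the suspect statement to a file; WSID/LID values are hypothetical
cat > /tmp/suspects.sql <<'EOF'
DELETE FROM JCR.ICMSTJCRREMOVEHLP WHERE WSID = 1 AND LID = 1;
EOF
# run on the database server: -d database, -f statement file,
# -i complete for detailed prepare/execute timing
command -v db2batch >/dev/null && db2batch -d WPJCR -f /tmp/suspects.sql -i complete \
    || echo "db2batch not available on this host"
```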


Thursday 27 June 2019

Db2 and Db2-on-Cloud - Technical Advocate Newsletter – June 2019



Db2 & Db2 on Cloud
Technical Advocate Newsletter
June 2019

    Announcement - IBM Db2 Version 11.5 is Now Available!!

    We're excited to announce the General Availability of Db2 Version 11.5!

    Exciting new features and improvements include: 
    • External Table support, including Object Store support
    • SQL compatibility, including CREATE/DROP TABLE .. IF EXISTS, and Oracle compatibility
    • New monitoring metrics, including aggregation across super classes and data on SQL statement failures
    • Client enhancements
    • Support for compiled SQL PL scalar functions in DPF
    • Support for 4K sector sizes on disk drives
    • Built-in spatial support
    • Automatic collection of column group statistics
    • WLM enhancements, including simplified threshold setup and dropping of service classes
    • ETL optimizations for BLU, including vectorized Insert/Update and optimized batch Insert
    • BLU compression enhancements, including automatic recompression and vectorized dictionary create
    • LOB support with columnar tables
    • Performance enhancements for columnar queries, including boosts for sorts and correlated subqueries
    • pureScale enhancements, including table free space management, performance with range-partitioned tables, and cross-member currently committed semantics
    • Enhanced security for pureScale; and more...
    Also, now available as a Technical Preview: Advanced Log Space Management; Faster DB Startup; Machine Learning Optimizer; Blockchain Federation Wrapper; Schema-Level Authorization; Db2 Augmented Data Explorer; and replication support for columnar tables.
    What's New in Version 11.5: https://ibm.co/2RDfri9
    Db2 11.5 Download Page: 
    https://ibm.co/2RC6Dca
    Db2 11.5 on DockerHub: https://hub.docker.com/r/ibmcom/db2
    Db2 11.5 Client Drivers Download Page: https://www-01.ibm.com/support/docview.wss?uid=swg21385217 


    Announcement - IBM Db2 Event Store 2.0 is Now Available!

    Db2 Event Store 2.0 is a major upgrade from previous versions bringing exciting new features in support of the Fast Data needs of our clients.
    Highlights of what's new: the IBM Common SQL Engine brings the industry's most sophisticated SQL engine to Fast Data workloads, including common SQL across the entire Db2 family; query performance is improved, with queries up to 50x faster through improved query optimization, parallel query execution, and multi-tiered caching of synopsis and data pages; time series and geospatial support are enhanced, with a rich functional library from IBM Research; and it is easier to use, with integrated backup and restore, enhanced problem determination tooling, and JDBC/ODBC standard connectivity.


    Overview of Db2 Event Store 2.0: https://www.ibm.com/support/knowledgecenter/SSGNPV_2.0.0/local/welcome.html


    Webinar - Db2 11.5! Breaking News from IBM Toronto Lab

    Kelly Rodger, IBM Senior Manager, walks us through the new and exciting release of Db2 for LUW! Db2 V11.5 has lots of new features and capabilities, license changes, and more.
    Watch the replay: 
    https://www.dbisoftware.com/blog/db2nightshow.php?id=780


    New Offering - Db2 Warehouse on Cloud Flex Plans

    The set of available Db2 Warehouse on Cloud plans is expanding, offering you new choices of cloud infrastructure and plan size.

    First, Db2 Warehouse on Cloud can now be deployed on the Amazon Web Services public cloud. There are two Flex offerings to choose from:
    • The “Flex” plan is ideal for storage-dense workloads, inexpensive query of large data sets, and a development/test environment for cloud.
    • The “Flex Performance” plan is ideal for high-performance production workloads that prioritize compute performance over storage density.
    Both plans offer high availability (HA), self-service backup and restore, and unlimited backups to AWS S3. For more information, see: https://www.ibm.com/blogs/bluemix/2019/03/db2-warehouse-flex-comes-to-aws/

    Second, we've introduced a new Db2 Warehouse on Cloud Flex One plan on IBM Cloud. Like Flex and Flex Performance, it delivers independent scaling of storage and compute and self-service backup and restore, but in a smaller starter configuration that you can scale as your needs grow. Configurations begin at 40GB disk storage and 6 cores. For more information, see: https://www.ibm.com/cloud/blog/db2-warehouse-flex-one


    Webinar - Db2 Problem Determination - a 3 HOUR Class!

    He's Back! Pavel Sustr, IBM Senior Manager, returns to give us a THREE HOUR class on Db2 LUW Problem Determination. Wow!
    Watch the replay: 
    https://www.dbisoftware.com/blog/db2nightshow.php?id=783


    Announcement - IBM Db2 on Cloud for AWS

    IBM has announced the launch of a Technical Preview for customers looking to run a fully managed Db2 database on Amazon AWS. It will be available for deployment in early July, but is available for purchase now via the HDMP Monthly Subscription. This allows customers to launch a fully managed Db2 Advanced (formerly AESE) database totally hassle-free. Patching, security and backups are all managed for you.

    https://developer.ibm.com/answers/questions/488671/where-is-information-about-db2-and-other-cloud-ven.html


    New Offering - IBM Db2 Developer-C (no license fee) Now Available on the Amazon marketplace

    The IBM Db2 AMI (Amazon Machine Image) for Developer-C is now publicly listed and available on the AWS Marketplace. The AMI allows users to launch and connect to a ready-made database and leverage the full Db2 Developer-C functionality.

    The AMI is based on Db2 Developer-C v11.1.4.4 and is AWS free-tier eligible. In other words, users with an AWS trial account can launch the AMI on EC2 micro instances for free. For non-free tier configurations, users will pay for AWS resources only.
    https://aws.amazon.com/marketplace/pp/B07NGL6BWC?qid=1549898607908&sr=0-1&ref_=srh_res_product_title


    Announcement - IBM Db2 on Cloud Introduces Point-in-Time Restore and Improves Cross-Regional Backups

    IBM Db2 on Cloud now makes it simple to restore to an exact point in time, via self-service. Working with backups is a critical feature to ensure you can rapidly get your database back to how you’d like it. Capabilities include: Standard daily encrypted backups with log shipping; Cloud Object Storage; Time travel queries and Db2 tools. For details, see: 
    https://www.ibm.com/cloud/blog/announcements/ibm-db2-on-cloud-introduces-new-features


    Conference & Summits - International Db2 Users Group Conferences (North America and Europe)

    Over the years, IDUG has become known for hosting dynamic conferences packed with the cutting-edge content you need. This year will be no exception. Be sure to make your plans to attend. With great sessions, keynote speakers, workshops, and more, you won't want to miss this event.
    IDUG Seminar in Sao Paulo, Brazil, August 20, 2019: https://www.idug.org/p/cm/ld/fid=1951
    IDUG Seminar in Mexico City, Mexico, August 22, 2019: https://www.idug.org/p/cm/ld/fid=2011
    IDUG Conference Australia, Sept 12-13, 2019 in Melbourne, and Sept 17-18 in Canberra: https://www.idug.org/au
    IDUG & IBM Data Tech Summit, Toronto, ON, Canada, Sept 23-24, 2019: https://www.idug.org/DTSToronto2019
    IDUG & IBM Data Tech Summit, San Jose, CA, USA, Oct 2-4 2019: https://www.idug.org/p/cm/ld/fid=2050
    IDUG Conference Europe, Rotterdam, Netherlands, Oct 20-24: https://www.idug.org/p/cm/ld/fid=1634


    Blog - Increasing data accessibility through lock avoidance via across-member Currently Committed semantics in Db2 pureScale

    In Db2 pureScale environments in Version 11.1 and prior, support for currently committed semantics was limited: a CS isolation row reader could bypass an in-flight row updater and retrieve the currently committed version of a record from the recovery log stream only when the row reader and row updater (lock holder) resided on the same member. Starting in Version 11.5, a CS isolation row reader in a Db2 pureScale environment can retrieve the currently committed version of a record whether the row reader and row updater (lock holder) reside on the same member or on different members.
    http://thinkingdb2.blogspot.com/2019/06/increasing-data-accessibility-through.html


    Blog - Installing Db2 the Easy Way: Docker

    Ian Bjorhovde -- As DBAs, one of the tasks that we do on a fairly regular basis is install new Db2 code on the database server. Although it has become routine for me, installing new code on a server can be surprisingly complex. Docker provides a simple alternative.
    https://www.idug.org/p/bl/et/blogaid=835


    Survey - Lab Advocate Engagement Survey 
    If you haven't already, please help us improve the lab advocate program by completing the lab advocate engagement survey. 
    http://bit.ly/2019_Db2_lab_advocate_customer_survey

    _______________________________________________________________________
    Roadmaps - Db2 Development Roadmaps are Now Accessible to Everyone

    Curious about the feature and function committed to upcoming versions of Db2? You can now explore live development roadmaps for Db2 and the IBM Analytics family of products. This content is live and subject to change.
    https://ibm-analytics-roadmaps.mybluemix.net/

    Db2 Community - Follow the Db2 Developer and Administrator Community
    Share. Solve. Do More.
    https://developer.ibm.com/data/db2/
    End of Support (EOS) - Support for Db2 Version 10.5 Ends April 30, 2020
    Effective April 30, 2020, IBM will withdraw support for Db2 Version 10.5. Extended support will be available beyond this date.
    https://ibm.co/2NCGTNi


    Follow-us! - Stay up to date on the latest news from the Db2 team
    IBM Support Community: http://ow.ly/rPsM30fHnwI
    IBM Db2 Twitter: https://twitter.com/IBM_Db2luw
    developerWorks Answers (forum): https://developer.ibm.com/answers/topics/
    Thoughts from Db2 Support (blog): https://www.ibm.com/developerworks/community/blogs/IMSupport?lang=en
    Db2 Technical Advocacy Wiki: https://www.ibm.com/developerworks/community/groups/community/Db2_Technical_Advocacy_Program
