Last week I had a lengthy post on oracle-l that tackles Calibrate IO, short stroking, stripe size, the UEK kernel, and the effect of ASM redundancy on IOPS on Exadata, which you can read here,
followed by an interesting exchange of tweets with Kevin Closson here (see the 06/21-22 tweets), which I was replying to in between games at the Underwater Hockey US Nationals 2012, where we won the B division championship ;) I have my awesome photo with the medal here.
This post will detail the effect of ASM redundancy/parity on IOPS: if you change the ASM redundancy level (external, normal, or high), will the workload's read/write IOPS decrease or stay the same? I'll walk you through, step by step, how I did the instrumentation plus the test case itself, then end with some results and observations.
Let’s get started!
I’ve been working on a lot of good schtuff lately in the area of capacity planning. I’ve greatly improved my turnaround time for workload characterization, visualization, and analysis using my AWR scripts, which I’ve enhanced to feed into the analytics tool I’ve been using: Tableau.
So I’ve got a couple of performance and capacity planning use case scenarios which I will blog in parts over the next few days or weeks. But before that, I need to familiarize you with how I mine this valuable AWR performance data.
Let’s get started with the AWR top events: the same top events that you see in your AWR reports, but presented as a time series across SNAP_IDs…
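To give you a feel for what that kind of query looks like (this is a minimal sketch of my own, not the exact script from my AWR toolkit), the idea is to join DBA_HIST_SYSTEM_EVENT to DBA_HIST_SNAPSHOT, use LAG() to turn the cumulative wait counters into per-snapshot deltas, and then rank the events within each SNAP_ID:

```sql
-- Hypothetical sketch: top 5 wait events per AWR snapshot, as a time series.
-- Assumes access to the DBA_HIST_* views (requires the Diagnostics Pack license).
SELECT snap_id, begin_time, event_name, wait_class, time_waited_s
FROM (
  SELECT snap_id, begin_time, event_name, wait_class, time_waited_s,
         -- rank events within each snapshot by time waited
         ROW_NUMBER() OVER (PARTITION BY snap_id
                            ORDER BY time_waited_s DESC NULLS LAST) AS rn
  FROM (
    SELECT e.snap_id,
           s.begin_interval_time AS begin_time,
           e.event_name,
           e.wait_class,
           -- delta of the cumulative counter vs. the previous snapshot
           ROUND((e.time_waited_micro
                  - LAG(e.time_waited_micro) OVER (
                      PARTITION BY e.instance_number, e.event_name
                      ORDER BY e.snap_id)) / 1e6, 2) AS time_waited_s
    FROM   dba_hist_system_event e
    JOIN   dba_hist_snapshot s
           ON  s.snap_id         = e.snap_id
           AND s.dbid            = e.dbid
           AND s.instance_number = e.instance_number
    WHERE  e.wait_class <> 'Idle'
  )
)
WHERE rn <= 5
ORDER BY snap_id, time_waited_s DESC;
```

The output of a query shaped like this is exactly what drops nicely into Tableau: one row per snapshot per event, ready to be plotted as a time series.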