21 Dec 2014

Why's my database suddenly so big?


In a Product based company, DB Developers at times don't get direct access to production boxes. Instead, an IT resource ends up managing a large swathe of Web / DB boxes. Since such a resource generally has a large field of operation, they need quick steps to identify sudden high disk usage.

In such a scenario (where Production is siloed away from DB Developers), correctly triaging a Database disk-usage spike is especially helpful, because a good bug report is at times the only thing that ensures a one-iteration resolution.

Recently, one of our Production Database boxes hit a Nagios disk alert, and an IT person wanted to identify who (and specifically what) was causing this sudden spike.

From the looks of it, it clearly was a time-consuming + space-consuming Job running on the box and not much could have been done (short of terminating it), but the following steps could have helped prepare a bug report of sorts for the DB developer to identify / correct it before it happens again:
  1. Isolate whether the disk increase is because of Postgres, i.e. compare the following two disk-usage outputs:
    1. df -h
    2. du -h --max-depth=1 /var/lib/postgresql/
  2. On the psql prompt
    1. SELECT
        datname AS db_name,
        pg_size_pretty(pg_database_size(oid)) AS db_size
      FROM pg_database
      ORDER BY pg_database_size(oid) DESC
      LIMIT 10; 
    2. Connect to the Database. (Assuming that there is one large database on the system, and its name is X. In the above output, it'd be the first row, under db_name):
    3.  \c  X
    4. From the PostgreSQL Wiki: SELECT nspname || '.' || relname AS "relation",
          pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
        FROM pg_class C
        LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
        WHERE nspname NOT IN ('pg_catalog', 'information_schema')
          AND C.relkind <> 'i'
          AND nspname !~ '^pg_toast'
        ORDER BY pg_total_relation_size(C.oid) DESC
        LIMIT 10; SELECT NOW();
  3. A minor addition, but sometimes a simple thing such as appending the current Server Timestamp is 'very' beneficial for accurate triaging. For example, if a given table is known to grow very big N minutes into processing, (assuming we have at least basic logging in place) a Developer can easily identify which method to focus on.
  4. In most cases, the query above (2.4) gives the list of tables one is looking for:
    1. Most often, it turns out that the Job is expected to consume processing disk space, and if so, the DB developer probably needs to ask IT to reserve enough empty disk space for each night's periodic processing. That's a simple solution and we're done.
    2. Alternatively, it's possible that the Job is (unexpectedly) creating some very large tables (and probably removing them post-processing). Such cases may need optimization, a need which probably got aggravated purely by DB growth. Possible, but again, we're clear about how to resolve it.
  5. However, at times the tables returned don't cover 'all' that could consume disk space:
    1. The list of tables generated in Step 2.4 above covers:
      1. (Regular) Tables
      2. Temporary Tables
      3. Unlogged Tables
    2. However, it does not cover tables created within another session's still-open transaction. More detail on that below:

Since Temporary Tables (created within a still-open Transaction) are NOT visible from another session, Query 2.4, when run by an administrator in another psql session, would not be able to show the temporary table.
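
To see this first-hand, here is a minimal sketch (two psql sessions; the table name big_scratch is hypothetical):

-- Session 1: create a temporary table inside a transaction, and keep the transaction open
BEGIN;
CREATE TEMPORARY TABLE big_scratch AS
  SELECT generate_series(1, 1000000) AS i;  -- the on-disk file appears immediately
-- (no COMMIT yet)

-- Session 2: the uncommitted catalog row is invisible here, so Query 2.4
-- (and even a direct catalog lookup) comes up empty:
SELECT relname FROM pg_class WHERE relname = 'big_scratch';
-- (0 rows)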

To find disk-consuming tables, an alternate approach may yield better results:

$ cd /var/lib/postgres/9.2/data/base/
$ ls -lR | awk '$5 > 1000000' | awk '{print "File: " $9 ",  Size: " $5}' | sed 's/\./ part-/g'

File: t22_3586845404,  Size: 819200000
File: 3586846552,  Size: 630161408
File: t24_3586846545,  Size: 1073741824
File: t24_3586846545 part-1,  Size: 1073741824
File: t24_3586846545 part-2,  Size: 1073741824

File: t24_3586846545 part-3,  Size: 559702016

Let's analyse this Shell command:
  • When run from the data/base folder of PostgreSQL, it shows which files use the most disk space across *all* databases. If you already know which database to focus on, you may want to go inside the data/base/xxxx folder and run this Shell command there instead.
  • In the output we see three things:
    • File 3586846552 is a large file pointing to a (non-temporary) table.
    • File t22_3586845404 is also a large file and points to a *temporary* table, but is less than 1 GB in size.
    • File t24_3586846545 is a large file, also points to a *temporary* table, and is between 3 GB and 4 GB in size (basically because each file segment is capped at 1 GB, hence the 'part' files), and is therefore a good contender to be researched further (see the sketch below).
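
Separately, mapping a filenode back to its relation is possible via SQL on newer versions. A sketch, assuming PostgreSQL 9.4 or later (where pg_filenode_relation() exists; the box above ran 9.2), run while connected to the database that owns the file:

-- 0 stands for the database's default tablespace; returns NULL if no
-- visible relation matches (e.g. another session's uncommitted temp table)
SELECT pg_filenode_relation(0, 3586846545);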

So let's investigate file t24_3586846545 further.

From a psql prompt:
postgres=# \x
Expanded display is on.
postgres=#

WITH x AS (
  SELECT trim('t24_3586846545')::TEXT AS folder_name
),
y AS (
  SELECT
    CASE
      WHEN position('_' in folder_name) = 0
        THEN folder_name::BIGINT
      ELSE substring(folder_name
        FROM (position('_' in folder_name) + 1))::BIGINT
    END AS oid
  FROM x
),
z AS (
  SELECT
    row_to_json(psa.*)::TEXT AS pg_stat_activity_dump,
    query AS latest_successful_query,
    array_agg(mode) AS modes,
    psa.pid
  FROM y, pg_locks join pg_stat_activity psa
    USING (pid)
  WHERE relation = oid
    AND granted
  GROUP BY psa.pid,row_to_json(psa.*)::TEXT, query
)
  SELECT *
  FROM z

UNION ALL

  SELECT
    'Doesn''t look like this folder (' ||
    (SELECT folder_name FROM x) ||
    ') stores data for another session''s transaction'::TEXT,
    NULL, NULL, NULL
  FROM z
  HAVING COUNT(*) = 0;

-[ RECORD 1 ]-----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
pg_stat_activity_dump   | {"datid":"2946058308","datname":"rt_ra","pid":16828,"usesysid":"16384","usename":"rms","application_name":"psql","client_addr":null,"client_hostname":null,"client_port":-1,"backend_start":"2014-12-20 18:52:47.702365+05:30","xact_start":"2014-12-20 20:01:12.630708+05:30","query_start":"2014-12-20 20:53:41.733738+05:30","state_change":"2014-12-20 20:53:41.734325+05:30","waiting":false,"state":"idle in transaction","query":"select n.nspname from pg_class c join pg_namespace n on n.oid=c.relnamespace\nwhere c.relname ='a' and n.nspname like 'pg_temp%';"}
latest_successful_query | select n.nspname from pg_class c join pg_namespace n on n.oid=c.relnamespace
                        | where c.relname ='a' and n.nspname like 'pg_temp%';
modes                   | {AccessExclusiveLock,RowExclusiveLock}
pid                     | 16828

postgres=#


The output of this SQL is probably going to be of at least some help if the rogue table is a table within another Transaction. I'll try to dissect what this query is doing:
  • The first column is basically the entire pg_stat_activity row mangled into a JSON object.
  • The pid is the PID of the Postgres process that is serving the Connection which currently holds a lock on the table behind t24_3586846545. This way, we can learn more about the connection that created this (large) table. (Please be very clear that, unless you know what you're doing, you shouldn't *kill* any postgres process from bash.)
  • The columns 'latest_successful_query' and 'modes' are probably uninteresting to the Admin, but may be of big help to the Developer.

(Obviously, a hands-on IT admin needs to replace the string (t24_3586846545) in the above SQL with the file name that (s)he gets the most number of times when executing the previous shell command.)
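
Conversely, if the Developer already knows the suspect table's name (rather than only its filenode), a simpler variant of the same lock-based idea usually suffices. A sketch, where my_big_table is a placeholder (the regclass cast errors out if the name can't be resolved):

SELECT psa.pid, psa.usename, psa.state, l.mode, psa.query
FROM pg_locks l
JOIN pg_stat_activity psa USING (pid)
WHERE l.granted
  AND l.relation = 'my_big_table'::regclass;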

Happy triaging :) !

20 Dec 2014

Car transfer between Indian states


After moving to Hyderabad last year, I had to bring my (Delhi-registered) car to Hyderabad for regular day-to-day use. Coupled with the fact that this was a (pretty) old car, registered (still) under my dad's name, I had a genuine question as to whether I should get the car re-registered in Hyderabad.

Considering that there is a lot of inter-state movement in India, in part owing to software-related employment, I am sure a lot of people have similar doubts, and I thought I'd clarify a few things that I found out the hard way.

Some things were learnt by directly talking to the Regional Transport Office head (Attapur), some by speaking to a few Documentation Agents, and some purely by doing it:

First, the 'should-dos':
  • Plan well before you leave the source city / state. It's highly understated, and really helps in saving your time later.
  • Get an NOC from the source city / state during this 'planning period'.
    • For example, you could get an NOC for Delhi as mentioned here. Other UTs / States probably have similar procedures that you could confirm with the local RTO.
    • The NOC requires the RTO office specifics of the target city (Hyderabad in my case), so you should have a rough idea of which part of the city you'd be staying in after the move. Understanding your office location and local rent prices near the office (Housing.com / 99Acres.com could help) would help in estimating which part of the city you'd want to stay in.
    • This decision is important because some large cities (like Hyderabad) have zonal RTOs, and all documentary transactions would be done *only* at this zonal RTO office. So even if you later find that another zonal RTO is closer to your house, you would *have* to deal with the RTO mentioned in the NOC. So choose wisely, to avoid inconvenience.
  • After you have moved to the target city (~first 45 days), keep your car-movement receipt with you at all times. Just like Insurance / PUC etc., this document gives you leverage in telling a Traffic cop that you've just moved into the City / State. I got my car moved through a movers-and-packers and kept their bill (with the transfer date clearly visible) alongside the car documents for the first month or two.
  • Irrespective of what my friends said, I was very clear from day one that paying the tax meant that I *never* had to hide from any Traffic police official in the middle of the road. To me, travelling with family (e.g. to a movie) needs to be tension-free, and that's one thing I'd recommend you take off your shoulders, despite the cost incurred in paying that tax.
    • The decision as to whether to pay the tax, or just sell off the vehicle and buy another one, was further complicated by the fact that I was not the owner (and would have had to call in Dad from another state for any sale / resale).
    • Additionally, this was a 12-year-old car. The resale value of such a car, whose owner is in another state and which isn't registered in Hyderabad (and is running on an NOC), is so low that you'd rather use it yourself as long as it works, and consider selling it off only when it's become a burden. Since it hadn't yet become a burden, I was strongly in favour of paying up.
  • I went to the zonal RTO (to which the NOC was addressed) and asked how to pay the tax. They needed the car's original purchase amount; luckily I had the original (12 years old!) purchase documents, which meant one less hurdle for me. The older the car, the lower the tax, but to my surprise, the road tax for this 12-year-old Santro still came out to be a whopping ~24k. Because I was clear about the decision, the amount was irrelevant and I went ahead with the tax.
    • A great realization at the Attapur RTO was that the young chap at the 'May I Help You' help-desk was unique enough to warrant a mention. In hindsight, the only other government office where I have since seen such a helpful and sensitive help-desk person was at the DDA Head Office in Delhi.
    • Ironically, both help-desk personnel were visually impaired, or as my friend Bharat puts it, blind. (Bharat is one too, and gets extremely irritated when anyone 'sugar-coats' the truth as anything but 'the-bare-truth'; I'll probably narrate his brave fight with the system another day.)
    • This man (at the Attapur office) was utterly focussed on addressing the next guy in line: he would clearly reconfirm the question asked, briefly cross-check in case he had any doubts about it, and accurately provide a practical solution at hand, which at times meant giving the bad news too.
    • To top it, he was also humble enough to accept a lack of knowledge on specific questions, but aptly guided me to the appropriate counter (window) where I could learn more.
    • Hats off; I was pleasantly surprised to see such alacrity in answering questions at a Government office.
Some additional info, for those who're keen enough:
  • Unofficially, a senior person at the zonal RTO said that earlier the Traffic personnel were very stringent about tax documents when catching an out-of-state vehicle. I presume this was mainly because hardly anyone used to pay such a heavy amount, and therefore the cops could ask for a bigger bribe to let the vehicle go. However, of late a group of Software Professionals approached the State Government ministry to apprise them that the road tax levied is a lifetime road tax, whereas a good chunk of software professionals in Hyderabad are in for a few years, after which they move on to another city / state / country. Logically, the system should either allow a refund of the remaining tax amount (a provision that currently doesn't exist), or allow a shorter tax period (say 3-4 years), which again doesn't exist. Since neither option was on the table, they hinted at resolving this in the Courts. The Government seemingly relented, and now (at least unofficially) the Traffic cops have been asked to lay off such cases until a formal settlement / updated rule comes into effect.
  • I spoke with another (non-government) Documentation Agent, who (without any financial benefit to himself) advised that I not go ahead with car re-registration. Since I made it clear that I was a Software Professional who didn't plan to stay here more than 3-4 years, he advised that beyond paying up the road tax, nothing else was recommended. This is because the re-registration process at times takes upward of 1-3 years, mainly because the shuffling of papers between two states takes ages. If, during this period, you need to move to a third state for some reason, you are stuck, because this process is very difficult to fast-track.
  • Additionally, an old friend has been driving on his UP registration number (after paying the adequate tax) for the past 5 years in Hyderabad, and has never had any issues, further affirming the idea that re-registration isn't a necessity. The only exception, of course, is if you intend to stay on forever: you'd probably want to re-sell the car at some point, so it's advisable that, while you are not in a hurry, you let the re-registration process take its years, and by the time it's time to sell off the car, your papers would have come in.
All the best, for your learning experience :) !

15 Dec 2014

PostgreSQL Certification

Recently, someone asked for some inputs before going ahead with a PostgreSQL Certification.

From what I know, outside Japan, EnterpriseDB is the only Consultancy that has offered Certification services consistently over the past few years. Having recently cleared one, I had a few things to say about it, and thought I'd enlist them as a full-blown post here.

Disclaimer: I reside in India, and thus the feedback below is quite possibly specific to my own experience and / or to this geographical location. Also, things have probably changed since I took this Certification a few months back, so as mentioned below, YMMV:

  • Concept
    • The PostgreSQL Certification (from EnterpriseDB) is split into two parts
      • Associate Certification
        • Cost: $200
        • Scope: Basic PostgreSQL training
        • Online Training
          • Cost: $1200
          • Online Training is recommended, but not mandatory, for clearing the Certification
      • Professional Certification
        • Cost: $200
        • Scope: Advanced PostgreSQL training
        • Online Training
          • Cost: $1200
          • Online Training is mandatory before attempting the Certification
      • Online Trainings
        • Essentially On-Demand Training sessions
          • Live-Recordings of training sessions
        • You get at most 42 days to learn at your own pace
      • Tests
        • You get at most 42 days to make your attempt, at your own pace
  • Value
    • EnterpriseDB is (now) a well-known name in the PostgreSQL consulting world. Amongst all others that I know, I've seen only this Consultancy offer Certification services consistently over the past few years.
    • Although PostgreSQL is decades old, it is (for a large part of the closed-source world) relatively nascent in the Enterprise arena, and thus there is an utter shortage of good DBAs. As with most other job hunts, the cream (of PostgreSQL DBAs) gets hand-picked from the mailing lists, some get picked up via references, but most others have to prove their mettle in technical rounds. In such situations, when a candidate has to compete with a big pile of resumes, a 'Certification' certainly puts you on top of the stack.
  • Cost
    • I come from India and am quite amazed at the scope of bargaining possible when trying to go for one of the available online Training courses (mind you, only the trainings; the certification costs were non-negotiable). YMMV in other countries, but I would certainly recommend at least attempting negotiations (especially if you're paying for the certification yourself).
  • Quality
    • The Online variant of the training consists of recorded sessions, and having read about a few of the trainers, I think I have a fair idea why there is heavy MTI (mother-tongue influence) on the pronunciation, despite the trainer desperately trying to anglicize it. Similarly, one can see repetitive grammatical / MTI-related issues, which come to the fore when the speaker isn't primarily an English speaker. This isn't a show-stopper, but it is something that could help in setting expectations before enrolling.
      • For example: instead of going to its 'backside', I'd rather go 'behind' it.
    • Another aspect of Quality is that the sessions are aimed at Novices. It's quite understandable that the trainer takes things pretty slow in places, but if you're already a DBA and aiming to just 'brush through' this training to take the certification, then, one must mention, you're going to torture yourself to frustration :). Putting it more accurately, you're probably going to really miss that the player doesn't allow you to watch the video at 2x or 4x speed... (a la Coursera).
      • Related to this, it seems the training materials were produced with a trivial amount of editing. That sounds economical, but at times you feel the editing team could have clipped the odd 'aaaa... I think the internet is down, or probably some maintenance is going on at AWS...' during a hands-on class session. These are rare, but such avoidable clips could have been edited out, since they just add to the wait time.
    • The instructors did a decent job of explaining the system. I wasn't expecting Tom :), or Greg or Josh, to start with, but they get the job done, and that's what counts.
    • The slides, on the other hand, seem to have been prepared by someone knowledgeable, with some decent experience handling PostgreSQL.
    • To be fair, the training tries to cover a lot of ground, and in a short training like this one obviously can't cover all aspects of such a huge topic. All in all, it does give the essentials to get a Novice DBA going.
  • Test
    • The Test (attempt) is "not" conducted in a Pearson Vue / Prometric center as you'd probably expect. You can take the Test from anywhere. I must say I was quite impressed on learning about this convenience. I chose to make my attempt from my office at a lean hour, and it was done in a jiffy.
    • For those who aren't very confident about taking the Test and instead prefer to 'Google' answers (now that they know it's in a convenient location), be fair-warned: the people who set the tests (I know at least one!) are some pretty smart people, and they've probably already accounted for this by giving you 'just about enough time', such that you'd effectively be penalizing yourself by expecting to 'search' your way through for all the answers.
    • On the flip side, I must say that Googling a trivial doubt or two is probably okay, as long as you are 'very' mindful of the timer, which ticks along pretty fast. The key to this recommendation is, as always, to be very thorough with the PostgreSQL online Documentation. If you know where to find things, you're probably going to do just fine.
  • Test Interface:
    • The interface is a no-brainer, and gets out of the way to let you focus on the Test. Just like with any other test interface, I quite liked the practice test that lets you get a feel of the Test before the actual attempt.
    • Ensure you try the 'check later' option to see how it works; at least I found it helpful / useful for reviewing things at the end.
    • For those with bad connectivity, it does allow you to 'continue' a test if you get disconnected. Although I didn't get to try it myself, a close friend did find himself unable to proceed in the test and promptly emailed the Training Coordinators about it. They (took a while to get back, but) recommended a better internet connection and gave him a re-attempt.
      • The only caveat with 'continuing' is that the questions already attempted remain 'fixed' and unmodifiable. The candidate is only allowed to attempt the 'remaining' questions in the 'remaining' time on the test timer.
  • Test Results
    • The results are generally out within a few business hours. Mine came within 2 (US) business hours. Another friend, who similarly attempted the test, didn't hear a word for a little more than a weekend; her fears were proven correct when she realised that she hadn't cleared it. Re-attempts are possible, but not only do they cost extra (just the certification cost); currently, only three attempts are allowed for 'any' Test.
  • Recommendation
    • Associate
      • Officially, a Training isn't required to take this Test.
        • However, if you take a Training session, a Test attempt is free.
      • My recommendation? If you're a seasoned PostgreSQL DBA, you'd probably do okay without it... and if you are able to brush through the training, it certainly wouldn't hurt.
    • Professional
      • Officially, the Training is mandatory.
        • As with the Associate, you get a Test attempt for free.
      • I'd say that the Training is just an extension of the Associate Training. About 60% of the modules are similar, with extra Labs + additional modules depending on the topic.

All the best :) !

21 Nov 2014

Sqsh / FreeTDS with SQL2012 using the instance argument


This is corner-case advice for anyone looking for a solution as to why the sqsh / tsql / FreeTDS combination works perfectly with one SQL2012 instance but is unable to log in to a newly configured SQL2012 instance whose details just came in.

For starters, Sqsh / FreeTDS is pretty helpful if you're dumping off an MSSQL Server to a CSV format (to, let's say, load into a PostgreSQL instance).
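
For completeness, the PostgreSQL end of such a dump is a one-liner. A sketch, where the table name and file path are hypothetical:

-- psql's client-side \copy, so the CSV only needs to be readable where psql runs
\copy staging_orders FROM '/tmp/mssql_dump.csv' WITH (FORMAT csv, HEADER true)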


Symptoms
  • Sqsh / Tsql / FreeTDS is perfectly configured
    • The setup logs in to another SQLServer perfectly well
  • All this while you are able to log in to both servers via SSMS from the same box
    • So it isn't a password issue
    • Neither is it a firewall issue
  • The new SQL Server keeps giving the following error messages:


Error Messages
Open Client Message: Layer 0, Origin 0, Severity 78, Number 41
Unable to connect: Adaptive Server is unavailable or does not exist

OR

#  /opt/freetds/bin/tsql -S "Q4DEV2" -U 'PostgreSQL' -P 'Password@123'
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
Error 20009 (severity 9):
        Unable to connect: Adaptive Server is unavailable or does not exist
        OS error 111, "Connection refused"
There was a problem connecting to the server


Login Details (as given by MSSQL Server Admin)
OLD_SERVER: q3.thatguyfromdelhi.com (this works via sqsh / tsql)
Login: PostgreSQL
Port: 1433 (default)
Password: Password@123

NEW_SERVER: q4.thatguyfromdelhi.com\SQL2012 (this isn't working via sqsh / tsql)
Login: PostgreSQL
Port: 9999 (non-default)
Password: Password@123


Solution
# cat /opt/freetds/etc/freetds.conf
[Q3DEV1]
        host = q3.thatguyfromdelhi.com
        port = 1433
        tds version = 7.0
        charset = UTF-8

[Q4DEV2]
        host = q4.thatguyfromdelhi.com
        instance = SQL2012
        tds version = 7.0
        tcp = 9999 # required only if port is not default (i.e. port other than 1433)
        charset = UTF-8

The *MAGIC* in the mix is the 'instance' keyword in the configuration. As things turn out, MSSQL Server does not just identify an instance via a port number (the way all sane server configurations do); it also allows a named instance, which does not work with just a PORT number.

In fact, you may have noticed that the port number is missing in the Q4DEV2 configuration; that is because PORT is mutually exclusive with the INSTANCE keyword. And yes, to confirm the obvious, the value of the instance parameter should be the text after the backslash (\) in the SQLServerName field that the MSSQL Server Admin provided earlier.

Alternatively, if you do have the new MSSQL Server listening on a non-default port (let's assume port 9999), then use the 'tcp' argument as shown above.
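
With the instance keyword in place, a quick sanity check from the tsql prompt (reusing the login from above; SELECT @@VERSION is stock T-SQL to confirm which instance you've reached):

#  /opt/freetds/bin/tsql -S "Q4DEV2" -U 'PostgreSQL' -P 'Password@123'
1> SELECT @@VERSION
2> go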

A few useful hours wasted at my end... hope this saves someone else's !!

5 Nov 2014

PostgreSQL Explain Plan-Nodes Grid


While reading up on the various PostgreSQL EXPLAIN plan nodes from multiple sources, I realized there was a clear lack of a consolidated grid / cheat-sheet in which I could (in one view) cross-check how the plan nodes compare. While at it, I also tried to attribute their (good / bad) characteristics.

Hope this (work-in-progress) lookup table helps others on the same path. Any additions / updates are more than welcome.
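
For instance, even a single query over the system catalogs surfaces several of the node types the grid compares. A sketch that should run on any PostgreSQL install:

EXPLAIN ANALYZE
SELECT c.relname, count(*)
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 5;
-- Typically yields Limit, Sort, HashAggregate, Hash Join and Seq Scan nodes,
-- i.e. several rows of the grid in a single plan.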

Update:
The image below is an old-snapshot of the document. The updated web document is available here.



23 Aug 2014

Magnifier in Chromebook

Of late I've started using the C720 (one of Acer's first Chromebooks), and am I in love with it or what!

While using the Chromebook to view an online Training that was created in a Flash environment (probably using Adobe Connect), I couldn't help getting frustrated by the fact that I couldn't do simple things such as zoom in on the training video (which happened to be a Shell session), and had to really concentrate on a small part of the screen to read what the font wanted to say.

Most browsers that aren't from the prehistoric era support zoom-in / zoom-out, but the presentation application is 'smart' enough to ensure that that doesn't work as expected. So although the HTML does get enlarged, the presentation stays the same size (just re-centered) in the newly zoomed-in/out screen space.

A second option (which I knew had existed in Ubuntu since its early days) was a nifty feature for these situations, where you could just zoom in and enlarge the screen (unbeknownst to the browser and all applications below the UI layer); this meant that anything rendered on the screen was magnified.

Now the Chromebook is smart, pretty compact and an awesome little product, but I really thought expecting graphics-related jugglery would be overkill... and I couldn't find such a feature either.

... until I read about the 'Accessibility' feature in the Chromebook (which is a simple little checkbox), and voila!

Steps:

  1. Click User-Image (bottom right)
  2. Click Settings
  3. Click 'Advanced Settings'
  4. Under 'Accessibility' options -> Enable screen magnifier
  5. Zoom in using the Ctrl-Alt-<Brightness Up> buttons
  6. Zoom out using the Ctrl-Alt-<Brightness Down> buttons
I am still to find things that I can't do on this ultra-cheap / ultra-light / long-lasting-battery / neat laptop!

12 Jul 2014

A Life ... that just happens to have a name.


Zohra Sehgal (1912-2014)

Ae kafir, qadr ki qadr na ho to na sahi,
Ae nadaan, Umr ki fikr na ho to na sahi,
Gar ranjish ho is sadi se ya pichla koi,
To rukh kar, Ja ban ke dikha Zohra koi.

(O unbeliever, if you won't value destiny, so be it;
O naive one, if you won't worry about age, so be it;
If you bear a grudge against this century, or the one before,
then turn around, and go show yourself to be a Zohra.)
