Category: coding
-
Writing Into Dynamic Partitions Using Spark
Hive has a wonderful feature called partitioning: a way of dividing a table into related parts based on the values of certain columns. With partitions, it's easy to query just a portion of the data, and Hive can optimize data load operations based on them. Writing data into partitions is very easy. You have two options:…
-
Parse Json in Hive Using Hive JSON Serde
In an earlier post I wrote a custom UDF to read JSON into my table. Since then, I have also learnt about and used the Hive-JSON-Serde. I will use the same example as before. Now, using the Hive-JSON-Serde you can parse the above JSON record as: This is really great! I can now parse more…
-
SPOJ | NICEDAY — The Day of the Competitors
Problem: Contestants are evaluated in 3 competitions. We say that a contestant A is better than B if A is ranked above B in all three competitions they were evaluated in, and that A is an excellent contestant if no other contestant is better than A. Given the ranks of all the contestants that participated…
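The dominance definition above can be sketched directly in Python. This is a minimal illustration, not the post's solution: the function name is mine, and the SPOJ input sizes would need an O(n log n) sweep (e.g. sort by the first rank and query a Fenwick tree over the second), whereas this O(n²) version only demonstrates what "better" and "excellent" mean.

```python
def excellent_count(ranks):
    """Count excellent contestants.

    ranks: list of (r1, r2, r3) tuples, one per contestant,
    giving each contestant's rank in the three competitions.
    """
    def better(a, b):
        # a is better than b if a ranks above b (smaller rank)
        # in all three competitions
        return all(x < y for x, y in zip(a, b))

    # a contestant is excellent if nobody else is better than them
    return sum(
        1
        for b in ranks
        if not any(better(a, b) for a in ranks if a is not b)
    )
```

For example, with ranks `[(1, 1, 1), (2, 2, 2), (1, 3, 2)]`, the second contestant is dominated by the first, so two contestants are excellent.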
-
Writing UDF To Parse JSON In Hive
Sometimes we need to perform data transformations in ways too complicated for SQL (even with the built-in UDFs Hive provides). Let's take JSON manipulation as an example. JSON is widely used to store and transfer data. Hive comes with a built-in json_tuple() function that can extract values for multiple keys at once. But if…
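To make the json_tuple() behaviour concrete, here is a rough Python analogue using only the standard library. This is an illustration of the idea, not Hive code: it pulls several top-level keys out of a JSON string in one pass, which is exactly where Hive's built-in stops and a custom UDF or SerDe becomes necessary for nested structures.

```python
import json

def json_tuple(record, *keys):
    """Rough analogue of Hive's json_tuple(): extract several
    top-level keys from a JSON string in a single parse."""
    obj = json.loads(record)
    return tuple(obj.get(k) for k in keys)

row = '{"name": "alice", "age": 30, "address": {"city": "pune"}}'
name, age, address = json_tuple(row, "name", "age", "address")
# name == "alice", age == 30; "address" comes back as a dict here,
# whereas Hive's json_tuple would return nested values as JSON strings.
```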
-
Project Euler | The Millionth Lexicographic Permutation Of The Digits
The 24th problem of Project Euler wanted the one-millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are: This was yet another problem which I solved…
-
Project Euler | Maximum Sum Traversing Top To Bottom In A Triangle
The 18th and the 67th problems in Project Euler are one and the same; the only difference is the input test case. Problem 18 has a smaller input and problem 67 a larger one, so I'll use problem 18 for the explanation. The code extends to problem 67 without any modification. Given a triangle of…
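The usual approach here is a bottom-up dynamic program: starting from the second-to-last row, replace each cell with itself plus the larger of its two children, so the apex ends up holding the maximum top-to-bottom sum. A short sketch of that idea (names are mine):

```python
def max_triangle_sum(triangle):
    """Maximum sum of any top-to-bottom path in a triangle,
    given as a list of rows of increasing length."""
    rows = [row[:] for row in triangle]  # don't mutate the input
    # collapse the triangle from the bottom row upward
    for r in range(len(rows) - 2, -1, -1):
        for c in range(len(rows[r])):
            rows[r][c] += max(rows[r + 1][c], rows[r + 1][c + 1])
    return rows[0][0]
```

On the small example from problem 18's statement, `max_triangle_sum([[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]])` is 23 (the path 3 → 7 → 4 → 9). Because each cell is visited once, the same code handles problem 67's 100-row input easily.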
-
Always Specify Region When Calling DynamoDb from Hive
DynamoDB is a key-value store. One can query DynamoDB tables from Hive using the DynamoDBStorageHandler. It's super easy to set up. Let's say we have built a platform that collects data for various clients, processes it, and outputs the processed data per client. For our example, let's say each client can be identified by…
-
CamelCase Partition Column is a Bad Idea in Hive
Outside Java code I prefer snake_case over camelCase. This is mostly a preference without any strong reason: without a proper IDE I find snake_case words easier to read than camelCase words. Python's naming convention uses snake_case for variable names, reserving camelCase for class names. Languages like MySQL, Hive, etc. convert everything…
-
Reusing Hive Scripts
Amazon's Elastic Data Pipeline does a fine job of scheduling data-processing activities. It spawns a cluster and executes a Hive script when the data becomes available, and after all the jobs have completed, the pipeline shuts down the EMR resource and exits. Since the cluster is only created and in use while the scripts are…
-
Scrapy | Crawl WhoScored For Football Stats
Earlier, I wrote code to crawl Google Play, the iTunes App Store, and Goal.com. But each time, I re-wrote code to fetch content from the website and parse it with BeautifulSoup, while maintaining a list of crawled URLs to avoid crawling them again. This was a lot of work. A while ago, I discovered Scrapy. It's…