Hello! My name is Akshay Ahluwalia, and I currently work as a Senior Data Analyst in the field of web analytics. Apart from my work, I am a big data enthusiast, taking on projects focused on Hadoop. I am passionate about learning front-end frameworks, developing web applications, and constantly taking on coding challenges, which keeps me up to date with back-end programming languages at the same time!
As part of a software development team, I created an automated testing framework in C# / Java to validate large numbers of electricity and gas bills for customers, ensuring that proper billing standards were applied. Each test run produced a final PDF report displaying the test case results and any abnormalities found in the billing calculations. Consuming web services was a major part of this effort.
I had the chance to work with some great technologies at this healthcare giant, including IBM DataPower for monitoring and debugging network traffic. As part of the production support team, I was responsible for ad-hoc fixes on the site related to front-end design and customer navigation issues, as well as tracing log calls using Splunk. It was a great learning experience, as I was exposed to front-end and back-end technologies at the same time.
Master's in Information Systems and Operations Management
- Quantitative Analysis
- Statistical Analysis for Managerial Decisions
- Web Programming
- Business Intelligence and Analytics
- Wireless Networks
- Software Testing and Quality Assurance
- Finance and Account Management
Bachelor of Engineering in Information Technology
- Core Java Development
- Database Systems
- Artificial Intelligence
- Information Network Security
- Systems Design and Development
- Data Structures and Algorithms
- Analog and Digital Circuits
The goal of this project was to capture and analyze real-time data directly from a user's Twitter home feed. The Twitter data is read through the Twitter API using the twitter4j library and fed into the Hadoop system. The data is then analyzed with MapReduce functions to find the tags and tweets that occur most often in the home feed, providing insights into the most active and commonly used tags on the site.
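The core of that MapReduce job is hashtag extraction (the map step) followed by counting (the reduce step). A minimal sketch of that logic in plain Java, without the Hadoop or twitter4j APIs (the tweet texts here are invented examples), might look like:

```java
import java.util.*;
import java.util.regex.*;

public class HashtagCount {
    private static final Pattern TAG = Pattern.compile("#\\w+");

    // "Map" step: extract every hashtag from one tweet's text,
    // lowercased so #Hadoop and #hadoop count as the same tag.
    static List<String> extractTags(String tweet) {
        List<String> tags = new ArrayList<>();
        Matcher m = TAG.matcher(tweet.toLowerCase());
        while (m.find()) tags.add(m.group());
        return tags;
    }

    // "Reduce" step: total the occurrences of each tag across all tweets.
    static Map<String, Integer> countTags(List<String> tweets) {
        Map<String, Integer> counts = new HashMap<>();
        for (String tweet : tweets)
            for (String tag : extractTags(tweet))
                counts.merge(tag, 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<String> feed = Arrays.asList(
                "Loving #hadoop and #bigdata",
                "#Hadoop MapReduce tutorial",
                "Just tweets, no tags");
        System.out.println(countTags(feed));
    }
}
```

In the real job, `extractTags` runs inside a Hadoop `Mapper` emitting `(tag, 1)` pairs, and the counting is done by the reducer.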
The social media survey data consists of user logs recording interactions with, and feedback on, a particular survey. This large data set is loaded into the Hadoop system as a CSV file using the Pig framework. The data set is then queried with Pig Latin scripts, which are compiled into MapReduce jobs to compute statistics such as the time taken by a user to answer a question, the tags most commonly used when answering questions, and other meaningful insights.
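A Pig Latin script along these lines could drive those queries; the file name, column names, and CSV layout below are assumptions for illustration, not the actual schema:

```pig
-- Load the survey interaction log from HDFS (assumed columns)
logs = LOAD 'survey_logs.csv' USING PigStorage(',')
       AS (user_id:chararray, question_id:chararray,
           tag:chararray, answer_time_secs:int);

-- Average time taken to answer each question
by_question = GROUP logs BY question_id;
avg_time    = FOREACH by_question
              GENERATE group AS question_id, AVG(logs.answer_time_secs);

-- Tags most commonly used when answering questions
by_tag     = GROUP logs BY tag;
tag_counts = FOREACH by_tag GENERATE group AS tag, COUNT(logs) AS uses;
top_tags   = ORDER tag_counts BY uses DESC;

DUMP avg_time;
DUMP top_tags;
```

Pig compiles each `GROUP`/`FOREACH` pipeline into one or more MapReduce jobs, which is exactly the translation the project relies on.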
The goal of this project was to analyze web logs provided by a telecom company. A Hadoop MapReduce program takes the data feed and, using the Writable interface, parses it into individual data points that can be used as keys to derive insightful observations. Key analyses, such as finding calls lasting more than a minute and calls made between any two states, are then computed.
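The parsing and filtering step can be sketched in plain Java; the pipe-delimited log layout and field positions below are assumptions for illustration, and in the real job this logic lives inside a Hadoop `Mapper` using a custom `Writable`:

```java
import java.util.*;

public class CallLogAnalysis {

    // Count calls longer than a minute between each pair of states.
    // Assumed log layout: fromState|toState|durationSeconds per line;
    // the real job emits the state pair as the MapReduce key and
    // sums the matches in the reducer.
    static Map<String, Integer> longCallsByStatePair(List<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            String[] f = line.split("\\|");      // fromState, toState, duration
            int durationSecs = Integer.parseInt(f[2]);
            if (durationSecs > 60)
                counts.merge(f[0] + "-" + f[1], 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList("NJ|NY|125", "NJ|NY|30", "CA|TX|90");
        System.out.println(longCallsByStatePair(log));
    }
}
```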
A Hadoop MapReduce program parses a weather log file, computing the maximum temperature recorded for each state listed in the log.
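The reduce step of that job is a per-state maximum. A minimal plain-Java sketch of the same logic (the "STATE,TEMP" line layout is an assumption for illustration) could be:

```java
import java.util.*;

public class MaxTempByState {

    // Compute the maximum recorded temperature per state from lines of
    // the form "STATE,TEMP"; in the real job the mapper emits
    // (state, temp) pairs and the reducer keeps the maximum per state.
    static Map<String, Integer> maxTemps(List<String> lines) {
        Map<String, Integer> max = new HashMap<>();
        for (String line : lines) {
            String[] f = line.split(",");
            max.merge(f[0], Integer.parseInt(f[1]), Math::max);
        }
        return max;
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList("TX,101", "TX,95", "FL,88");
        System.out.println(maxTemps(log));
    }
}
```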