One of the most common challenges organizations face with their data lakes is the inability to find, understand, and trust the data they need to derive business value or gain a competitive edge. Organizations pour all of their enterprise data into a data lake built on object storage such as Amazon S3. In no time, however, the data lake turns into an unusable swamp of redundant data copies. This quietly drives up organizational costs, because searching and indexing the data becomes difficult.
At R Systems, we’ve built an efficient data lake cataloging process using Amazon S3, Amazon DynamoDB, AWS Lambda (serverless computing), and Amazon Elasticsearch Service.
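A common pattern for this kind of catalog is an AWS Lambda function subscribed to S3 `ObjectCreated` events that records each object's metadata in DynamoDB and indexes it in Elasticsearch for search. The sketch below is illustrative only and is not R Systems' actual implementation; the function names and the catalog schema (`pk`, `size_bytes`, etc.) are assumptions.

```python
# Minimal sketch of an S3-event-driven catalog Lambda.
# Assumption: the function is triggered by S3 ObjectCreated notifications;
# the attribute names below are hypothetical, not a documented schema.

def build_catalog_item(s3_record):
    """Turn one S3 event record into a catalog entry keyed for DynamoDB."""
    bucket = s3_record["s3"]["bucket"]["name"]
    obj = s3_record["s3"]["object"]
    return {
        "pk": f"{bucket}/{obj['key']}",      # object path as partition key
        "bucket": bucket,
        "key": obj["key"],
        "size_bytes": obj.get("size", 0),
        "etag": obj.get("eTag", ""),
        "event_time": s3_record.get("eventTime", ""),
    }

def handler(event, context):
    """Lambda entry point: build a catalog entry per newly created object.

    In a real deployment, each item would also be persisted with boto3's
    DynamoDB put_item and indexed into Amazon Elasticsearch Service so the
    lake remains searchable as objects land.
    """
    return [build_catalog_item(r) for r in event.get("Records", [])]
```

Keying the catalog on the full `bucket/key` path keeps entries unique, which is what makes duplicate copies visible instead of letting them silently accumulate.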
Director – Big Data & Analytics, R Systems
Ajay has around 23 years of experience providing data-driven solutions to challenging business problems. He has worked as a Chief Data Scientist and big data architect, helping organizations compete on analytics with the best possible predictive modeling and solutions. He excels at delivering valuable insights through data analytics and advanced data-driven methods.
He is an avid learner of new cloud technologies and has earned the AWS Certified Solutions Architect and AWS Certified Big Data – Specialty certifications.
TBD, Amazon Web Services