
Using Spark to Load Oracle Data into Cassandra (Jim Hatcher, IHS Markit) | C* Summit 2016

Slides: https://www.slideshare.net/DataStax/using-spark-to-load-oracle-data-into-cassandra-jim-hatcher-ihs-markit-c-summit-2016

Spark is an execution framework designed to operate on distributed systems like Cassandra. It's a handy tool for many things, including ETL (extract, transform, and load) jobs. In this session, I'll share some tips and tricks I've learned through experience. I'm no oracle, but I can guarantee these tips will get you well down the path of pulling your relational data into Cassandra.

About the Speaker
Jim Hatcher, Principal Architect, IHS Markit
Jim Hatcher is a software architect with a passion for data. He has spent most of his 20-year career working with relational databases, but for the last several years he has been working with Big Data technologies such as Cassandra, Solr, and Spark. He has supported systems with very large databases at companies like First Data, CyberSource, and Western Union. He is currently working at IHS, supporting an electronic parts database that tracks half a billion electronic parts using Cassandra.
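As a rough illustration of the kind of pipeline the talk covers (not the speaker's own code), a minimal Spark/Scala sketch might read an Oracle table over JDBC and write it to Cassandra with the spark-cassandra-connector. The connection string, credentials, keyspace, and table names below are hypothetical placeholders.

import org.apache.spark.sql.SparkSession

object OracleToCassandra {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oracle-to-cassandra")
      // Point the spark-cassandra-connector at the Cassandra cluster.
      .config("spark.cassandra.connection.host", "127.0.0.1")
      .getOrCreate()

    // Read a source table from Oracle over JDBC (the Oracle driver jar must be on the classpath).
    val parts = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB") // hypothetical connection string
      .option("dbtable", "PARTS")                                    // hypothetical source table
      .option("user", "etl_user")
      .option("password", sys.env.getOrElse("ORACLE_PASSWORD", ""))
      .option("fetchsize", "10000")                                  // larger fetch size helps on big tables
      .load()

    // Write the DataFrame to Cassandra through the connector's DataFrame API.
    parts.write
      .format("org.apache.spark.sql.cassandra")
      .option("keyspace", "parts_ks") // hypothetical keyspace
      .option("table", "parts")       // hypothetical table
      .mode("append")
      .save()

    spark.stop()
  }
}

A job like this would typically be packaged as a jar and launched with spark-submit; partitioning the JDBC read (for example with partitionColumn/lowerBound/upperBound/numPartitions) is one common way to parallelize pulls from very large Oracle tables.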
Text Comments (2)
sha p (1 month ago)
1) My tables in Oracle have millions of records. Can we push all the records of a table (or calculated rows) into Cassandra at once? Will it allow that, or should we load incrementally? If incremental, how do we keep track of the records already pushed versus the ones yet to be pushed?
sha p (1 month ago)
Thanks a lot for sharing this experience. I have a few doubts. 1) Where and how do we deploy and run this code in production? 2) Our existing application is in Java/Spring, so how do we integrate this, or should I create a new project? 3) What versions of Cassandra and Spark should we use? 4) Can I have the GitHub code for this sample?
