    Spark for Data Science with Python 2022 | Simpliv

    Website: https://www.simpliv.com/python/from-0-to-1-spark-for-data-science-with-python

    Category Python

    Deadline: December 30, 2022 | Date: December 30, 2022

    Venue/Country: Online Courses, U.S.A

    Updated: 2018-05-07 16:17:17 (GMT+9)


    Taught by a four-person team including two Stanford-educated ex-Googlers and two ex-Flipkart Lead Analysts. This team has decades of combined practical experience working with Java and with billions of rows of data.

    Get your data to fly using Spark for analytics, machine learning and data science

    Let’s parse that.

    What's Spark? If you are an analyst or a data scientist, you're used to having multiple systems for working with data. SQL, Python, R, Java, etc. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms and then use the same system to productionize your code.

    Analytics: Using Spark and Python you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and Dataframes to manipulate data with ease.
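    To give a feel for that kind of interactive exploration, here is a plain-Python sketch of a group-and-average over a tiny, made-up airline-delays dataset (one of the datasets the course works with). In PySpark, the same idea would be expressed with a DataFrame's groupBy and an average aggregation; the data and field names below are illustrative only.

```python
from collections import defaultdict

# Tiny, hypothetical airline-delays dataset (carrier, delay in minutes).
flights = [
    {"carrier": "AA", "delay": 12},
    {"carrier": "AA", "delay": 3},
    {"carrier": "UA", "delay": 45},
    {"carrier": "UA", "delay": 0},
]

# Group delays by carrier, then average each group.
totals = defaultdict(lambda: [0, 0])  # carrier -> [sum of delays, count]
for f in flights:
    totals[f["carrier"]][0] += f["delay"]
    totals[f["carrier"]][1] += 1

avg_delay = {carrier: s / n for carrier, (s, n) in totals.items()}
print(avg_delay)  # {'AA': 7.5, 'UA': 22.5}
```

    With Spark, the same pipeline runs unchanged whether the data is four rows or four billion, which is the point of learning one engine.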

    Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations in very few lines of code. We'll cover a variety of algorithms and datasets, including PageRank, MapReduce and graph datasets.
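    As a taste of what "few lines of code" means here, below is a minimal plain-Python sketch of the PageRank iteration the course implements on Spark. The three-node graph is made up for illustration, and 0.85 is the conventional damping factor.

```python
# Hypothetical link graph: node -> list of nodes it links to.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {node: 1.0 for node in links}

for _ in range(30):
    # Each node splits its current rank evenly among its outgoing links.
    contribs = {node: 0.0 for node in links}
    for node, outs in links.items():
        for dest in outs:
            contribs[dest] += ranks[node] / len(outs)
    # Standard damping: keep 15% uniform, redistribute 85% via links.
    ranks = {node: 0.15 + 0.85 * c for node, c in contribs.items()}

# 'c' collects links from both 'a' and 'b', so it converges to the top rank;
# with no dangling nodes, total rank stays equal to the number of nodes.
print(sorted(ranks, key=ranks.get, reverse=True))
```

    On Spark, the contributions step becomes a join plus a reduceByKey over an RDD of links, so the identical logic scales to web-sized graphs.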

    What's Covered:

    Lots of cool stuff:

    Music Recommendations using Alternating Least Squares and the Audioscrobbler dataset

    Dataframes and Spark SQL to work with Twitter data

    Using the PageRank algorithm with Google web graph dataset

    Using Spark Streaming for stream processing

    Working with graph data using the Marvel Social network dataset

    ...and of course all the basic and advanced Spark features:

    Resilient Distributed Datasets, Transformations (map, filter, flatMap), Actions (reduce, aggregate)

    Pair RDDs, reduceByKey, combineByKey

    Broadcast and Accumulator variables

    Spark for MapReduce

    The Java API for Spark

    Spark SQL, Spark Streaming, MLlib and GraphFrames (GraphX for Python)
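    To make the transformation names above concrete, here is a plain-Python analogue of the classic pair-RDD word count, showing what flatMap, map, and reduceByKey each contribute. In PySpark the equivalent pipeline is roughly rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(operator.add); the input lines below are just an example.

```python
from functools import reduce
from itertools import groupby
from operator import add

lines = ["to be or not", "to be"]

# flatMap: each line -> many words, flattened into one sequence.
words = [w for line in lines for w in line.split()]
# map: each word -> a (key, value) pair.
pairs = [(w, 1) for w in words]
# reduceByKey: group the pairs by key, then reduce each group's values.
counts = {
    key: reduce(add, (v for _, v in group))
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0])
}
print(counts)  # {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

    The sort-then-group step here stands in for the shuffle that reduceByKey performs across a cluster.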

    Using discussion forums

    Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students. :-(

    We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

    The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

    We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

    It is a hard trade-off.

    Thank you for your patience and understanding!

    Who is the target audience?

    Yep! Analysts who want to leverage Spark for analyzing interesting datasets

    Yep! Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.

    Yep! Engineers who want to use a distributed computing engine for batch or stream processing or both

    Basic knowledge

    The course assumes knowledge of Python. You can write Python code directly in the PySpark shell. If you already have IPython Notebook installed, we'll show you how to configure it for Spark.
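    For reference, one common way to point the PySpark shell at a notebook front end is via the driver-Python environment variables (variable names per the Spark docs; check the docs for your Spark version, as defaults have changed over releases):

```shell
# Launch the PySpark shell inside a Jupyter/IPython notebook.
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
./bin/pyspark   # opens a notebook with a SparkContext available as sc
```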

    For the Java section, we assume basic knowledge of Java. An IDE that supports Maven, such as IntelliJ IDEA or Eclipse, would be helpful.

    All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).

    What you will learn

    Use Spark for a variety of analytics and Machine Learning tasks

    Implement complex algorithms like PageRank or Music Recommendations

    Work with a variety of datasets from Airline delays to Twitter, Web graphs, Social networks and Product Ratings

    Use all the different features and libraries of Spark: RDDs, Dataframes, Spark SQL, MLlib, Spark Streaming and GraphX

    Email: support@simpliv.com

    Phone: 510-849-6155


    Registration Link: https://www.simpliv.com/python/from-0-to-1-spark-for-data-science-with-python

    Simpliv YouTube channel: https://www.youtube.com/channel/UCZZevQcSlAK689KbsrMvEog?view_as=subscriber

    Facebook Page: https://www.facebook.com/simplivllc

    Linkedin: https://www.linkedin.com/company/simpliv

    Twitter: https://twitter.com/simplivllc

