What are the major differences between Python and R for data science?

Both Python and R have vast software ecosystems and communities, so either language is suitable for almost any data science task. That said, there are some areas in which one is stronger than the other.

Where Python Excels

The majority of deep learning research is done in Python, so tools such as Keras and PyTorch have "Python-first" development. You can learn about these topics in Introduction to Deep Learning in Keras and Introduction to Deep Learning in PyTorch.
Another area where Python has an edge over R is in deploying models to other pieces of software. Python is a general-purpose programming language, so if you write an application in Python, the process of including your Python-based model in it is seamless. We cover deploying models in Designing Machine Learning Workflows in Python and Building Data Engineering Pipelines in Python.
Python is often praised for being a general-purpose language with an easy-to-understand syntax.

Where R Excels

A lot of statistical modeling research is conducted in R, so there's a wider variety of model types to choose from. If you regularly have questions about the best way to model data, R is the better option. DataCamp has a large selection of courses on statistics with R.
The other big trick up R's sleeve is easy dashboard creation using Shiny. This enables people without much technical experience to create and publish dashboards to share with their colleagues. Python does have Dash as an alternative, but it’s not as mature. You can learn about Shiny in our course on Building Web Applications with Shiny in R.
R's functionality was developed with statisticians in mind, thereby giving it field-specific advantages such as great features for data visualization.
This list is far from exhaustive, and experts endlessly debate which tasks can be done better in one language or the other. Further, Python programmers and R programmers tend to borrow good ideas from each other. For example, Python's plotnine data visualization package was inspired by R's ggplot2 package, and R's rvest web scraping package was inspired by Python's BeautifulSoup package. So eventually, the best ideas from either language find their way into the other, making both languages similarly useful and valuable.

If you’re too impatient to wait for a particular feature in your language of choice, it's also worth noting that there is excellent language interoperability between Python and R. That is, you can run R code from Python using the rpy2 package, and you can run Python code from R using reticulate. That means that all the features present in one language can be accessed from the other language. For example, the R version of the deep learning package Keras actually calls Python. Likewise, rTorch calls PyTorch.
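
As a small sketch of the Python side, here is what calling R via rpy2 can look like (this assumes R and the rpy2 package are installed):

import rpy2.robjects as robjects

# Evaluate a snippet of R code and pull the result back into Python.
r_mean = robjects.r('mean(c(1, 2, 3, 4, 5))')
print(r_mean[0])  # 3.0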

Beyond features, the languages are sometimes used by different teams or individuals based on their backgrounds.

Who Uses Python

Python was originally developed as a programming language for software development (the data science tools were added later), so people with a computer science or software development background might feel more comfortable using it.
Accordingly, the transition from other popular programming languages such as Java or C++ to Python is easier than the transition from those languages to R.

Who Uses R

R has a set of packages known as the Tidyverse, which provide powerful yet easy-to-learn tools for importing, manipulating, visualizing, and reporting on data. Using these tools, people without any programming or data science experience (at least anecdotally) can become productive more quickly than in Python.
If you want to test this for yourself, try taking Introduction to the Tidyverse, which introduces R's dplyr and ggplot2 packages. It will likely be easier to pick up than Introduction to Data Science in Python, but why not see for yourself which you prefer?
Overall, if you or your employees don't have a data science or programming background, R might make more sense.
Wrapping up, though it may be hard to know whether to use Python or R for data analysis, both are great options. One language isn’t better than the other—it all depends on your use case and the questions you’re trying to answer.

Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components.

Python is a very popular language. It’s also one of the languages that I recommend for beginners to start with. But how do you go about learning this language?

Here are some cool Python tricks:

1. Go for built-in functions:
You can write efficient code in Python, but it’s very hard to beat built-in functions (written in C).
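
As a rough illustration rather than a rigorous benchmark, summing a list with the built-in sum() versus a hand-written loop might look like this:

import timeit

def manual_sum(values):
    total = 0
    for v in values:
        total += v
    return total

values = list(range(10_000))
print(timeit.timeit(lambda: manual_sum(values), number=1_000))
print(timeit.timeit(lambda: sum(values), number=1_000))  # usually several times faster
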
2. Use join() to glue a large number of strings:
You can use “+” to combine several strings, but since strings are immutable in Python, every “+” operation creates a new string and copies the old content. A common idiom is to collect the pieces in a list and, when you are done, call join() once to build the final string.
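
A short sketch of that pattern (the loop body is only illustrative):

parts = []
for i in range(1000):
    parts.append(str(i))

# One allocation for the final string instead of one per "+" operation.
result = ",".join(parts)
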
3. Use Python multiple assignment to swap variables:
This is elegant and faster in Python:

x, y = y, x


This is slower:

temp = x
x = y
y = temp


4. Use local variables if possible:

Python is faster at retrieving a local variable than a global one, so avoid the “global” keyword and repeated global lookups in hot code.
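
A small sketch of the idea; the function names are made up for illustration, and math.sqrt stands in for any frequently used global or builtin:

import math

def lengths_global(points):
    # math.sqrt is looked up in the module scope on every iteration
    return [math.sqrt(x * x + y * y) for x, y in points]

def lengths_local(points):
    sqrt = math.sqrt  # bound once to a local name
    return [sqrt(x * x + y * y) for x, y in points]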

5. Use “in” if possible:

To check membership in general, use the “in” keyword. It is clean and fast.

if key in sequence:
    print("found")


6. Speed up by lazy importing:

Move an “import” statement into the function that needs it, so the module is imported only when it is actually used. In other words, if some modules are not needed right away, import them later. You can speed up your program's startup by not importing a long list of modules at launch. This technique does not improve overall performance; it helps you distribute the module loading time more evenly.
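
A small sketch, using matplotlib purely as an example of a heavy optional dependency:

def plot_results(values):
    import matplotlib.pyplot as plt  # imported only when plotting is actually requested
    plt.plot(values)
    plt.show()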

7. Use “while 1” for the infinite loop:

Sometimes you want an infinite loop in your program (for instance, a listening socket). In old Python 2 code, “while True” required a name lookup on every iteration, whereas “while 1” compiled to a single jump; in Python 3, True is a keyword constant and the two are equivalent, so treat this as a legacy micro-optimization.

while 1:
    do_stuff()   # placeholder; historically faster in Python 2

while True:
    do_stuff()   # equivalent in Python 3


8. Use list comprehensions:

Since Python 2.0, you can use a list comprehension to replace many “for” and “while” blocks. A list comprehension is faster because it avoids the per-iteration method lookups and extra bytecode of an explicit loop. As a bonus, it can be more readable (in a functional style), and in most cases it saves you an extra counting variable. For example, let’s get the even numbers below 10 in one line:

# The Pythonic way: a list comprehension over a range
evens = [i for i in range(10) if i % 2 == 0]
# [0, 2, 4, 6, 8]

# The following is not so Pythonic
i = 0
evens = []
while i < 10:
    if i % 2 == 0:
        evens.append(i)
    i += 1
# [0, 2, 4, 6, 8]


9. Use range() for a very long sequence:

In Python 3, range() yields one integer at a time, so looping over a huge range costs almost no memory. Materializing the whole sequence with list(range(...)) (or using range() in Python 2, where it built a full list) is unnecessary overhead for looping.
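
A small sketch of the difference (the exact byte count varies by Python version):

import sys

lazy = range(10**8)          # a constant-size range object
print(sys.getsizeof(lazy))   # small, a few dozen bytes

# eager = list(range(10**8)) # would allocate memory for 100 million integers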

10. Use a Python generator to get values on demand:

This can also save memory and improve performance. If you are streaming video, for example, you can yield one chunk of bytes at a time instead of loading the entire stream.
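
A short sketch of a chunked reader; the file name, chunk size, and send() consumer are hypothetical:

def read_in_chunks(path, chunk_size=64 * 1024):
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk   # hand back one chunk at a time, on demand

# for chunk in read_in_chunks("video.mp4"):
#     send(chunk)        # hypothetical consumer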

11. Understand that a Python list is actually an array:

A list in Python is not implemented as the singly linked list people talk about in computer science; it is an array. That means you can retrieve an element by index in constant time, O(1), without scanning from the beginning of the list. What’s the implication? A Python developer should pause before calling insert() on a list object, for example:

list.insert(0, element)

Inserting at the front is not efficient, because every subsequent element has to be shifted. You can, however, append an element to the end of the list efficiently with list.append(). If you want fast insertion or removal at both ends, pick collections.deque instead; it is fast because it is implemented as a doubly linked list of blocks.
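
A small sketch comparing front insertion on a list with appendleft() on a deque; the sizes and repeat counts are arbitrary:

from collections import deque
import timeit

def fill_list(n):
    lst = []
    for i in range(n):
        lst.insert(0, i)      # O(n): shifts every existing element

def fill_deque(n):
    dq = deque()
    for i in range(n):
        dq.appendleft(i)      # O(1): constant time at either end

print(timeit.timeit(lambda: fill_list(10_000), number=10))
print(timeit.timeit(lambda: fill_deque(10_000), number=10))   # typically far faster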

12. Use a dictionary or set to test membership:

Python is very fast at checking whether an element exists in a dictionary or a set, because both are implemented with hash tables; a lookup is O(1) on average. Therefore, if you need to check membership very often, use a dictionary or a set as your container.

mylist = ['a', 'b', 'c']       # slower: membership check scans the list
'c' in mylist                  # True

myset = set(['a', 'b', 'c'])   # faster: membership check uses a hash table
'c' in myset                   # True


13. Cache results with a Python decorator:

The “@” symbol is Python’s decorator syntax. Decorators are useful not only for tracing, locking, or logging: you can also decorate a function so that it remembers results it has already computed. This technique is called memoization. Here is an example:

from functools import wraps

def memo(f):
    cache = {}

    @wraps(f)
    def wrap(*args):
        if args not in cache:
            cache[args] = f(*args)   # cache keyed by the argument tuple
        return cache[args]

    return wrap


And we can use this decorator on a Fibonacci function:

@memo
def fib(i):
    if i < 2:
        return 1
    return fib(i - 1) + fib(i - 2)


The key idea here is simple: enhance (decorate) your function to remember each Fibonacci term it has calculated; if a term is already in the cache, there is no need to calculate it again.

14. Understand the Python GIL (global interpreter lock):

The GIL exists because CPython’s memory management is not thread-safe. You can’t simply create multiple threads and hope Python will run faster on a multi-core machine, because the GIL prevents more than one native thread from executing Python bytecode at a time; in other words, it serializes your threads for CPU-bound work. You can, however, speed up such programs by splitting the work across several separate processes (for example with the multiprocessing module), since each process has its own interpreter and its own GIL.
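
A small sketch of spreading CPU-bound work over worker processes; the worker function and pool size are arbitrary:

from multiprocessing import Pool

def cpu_bound(n):
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:                # each worker is a separate interpreter
        results = pool.map(cpu_bound, [10**6] * 8)
    print(results)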

15. Treat the Python source code as your documentation:

Many Python modules are implemented in C for speed. When performance is critical and the official documentation is not enough, feel free to explore the source code yourself to find out the underlying data structures and algorithms. The CPython repository is a wonderful place to poke around.

Meet the experts for better guidance: https://nareshit.com/python-online-training/
