<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Agrover112</title>
    <description>The latest articles on DEV Community by Agrover112 (@agrover112).</description>
    <link>https://dev.to/agrover112</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F316693%2F347cb5b1-9853-4645-9291-86c9c5dc2675.jpeg</url>
      <title>DEV Community: Agrover112</title>
      <link>https://dev.to/agrover112</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/agrover112"/>
    <language>en</language>
    <item>
      <title>Mistakes to Avoid During Python Library Creation</title>
      <dc:creator>Agrover112</dc:creator>
      <pubDate>Mon, 14 Nov 2022 04:53:40 +0000</pubDate>
      <link>https://dev.to/agrover112/mistakes-to-avoid-during-python-library-creation-13j4</link>
      <guid>https://dev.to/agrover112/mistakes-to-avoid-during-python-library-creation-13j4</guid>
      <description>&lt;h2&gt;
  
  
  Mistakes to Avoid During Python Library Creation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ieUJmQJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7900/0%2AJJikCfSit2Gc21Sb" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ieUJmQJG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7900/0%2AJJikCfSit2Gc21Sb" alt="Photo by [Michael Dziedzic](https://unsplash.com/@lazycreekimages?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)" width="880" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating a Python library can be daunting. If you haven’t built one before, you will almost certainly make some mistakes unless you come across good advice first.&lt;/p&gt;

&lt;p&gt;Luckily for you, I have listed some irritating but common issues and technical problems you might encounter, and how to avoid repeating them.&lt;/p&gt;

&lt;p&gt;I made many of these mistakes myself while building a library, unaware of the consequences for the overall design as well as the hindrance to productivity!&lt;/p&gt;

&lt;p&gt;Why do I stress and list these out? Because your design decisions can have a huge impact on your library. In the following sections, we will look at the common issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Choose a Name&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choosing a name is hard, especially if you are going to open-source it! Good luck finding an &lt;strong&gt;unused repo name on GitHub&lt;/strong&gt; and an &lt;strong&gt;unused package name on PyPI&lt;/strong&gt;, seriously! I had to rename my entire package because I was a few days late.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Set Up a Git Workflow Early&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Good luck trying to manage a new feature or fix a bug without a Git setup; version control just makes your life easier. Seriously, if you aren’t using Git, &lt;em&gt;oh boi, I wish you luck, you are going to learn it the hard way.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Use Conventional Commits&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Conventional Commits help you identify which commit does what, and they also help when you are using &lt;strong&gt;Semantic Versioning&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;SemVer is simple: three numbers, &lt;strong&gt;x.y.z&lt;/strong&gt;, standing for &lt;strong&gt;MAJOR.MINOR.PATCH&lt;/strong&gt;. Any commit using &lt;strong&gt;‘fix:’&lt;/strong&gt; should increment &lt;strong&gt;z&lt;/strong&gt;, any &lt;strong&gt;‘feat:’&lt;/strong&gt; usually updates &lt;strong&gt;y&lt;/strong&gt;, and any breaking change should end up in &lt;strong&gt;x&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can read more about both here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.conventionalcommits.org/en/v1.0.0-beta.2/"&gt;Conventional Commits&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://semver.org/"&gt;Semantic Versioning 2.0.0 | Semantic Versioning (semver.org)&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Choose a Design Pattern&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Choose a design pattern as early as possible and save yourself the time otherwise spent converting modular code into OOP patterns later.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Use Multiple Python Environments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Yes, create multiple Python environments, be it conda or virtualenv, to test the library’s compatibility on different Python versions! A simple trick is to try it out on Google Colab, for example!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html"&gt;Managing environments — conda 4.10.3.post40+c1579681 documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/52816156/how-to-create-virtual-environment-for-python-3-7-0"&gt;How to create a virtual environment for python 3.7.0? — Stack Overflow&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;strong&gt;Use Pre-Commits&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I wish I had known about Git hooks earlier; they can save so much time by handling formatting, commit-message checks, docstring checks, and import sorting. It is just grunt work, which can be automated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GH50lMd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7666/1%2A15kEo8W7oBeSPW9xZlnPzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GH50lMd4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7666/1%2A15kEo8W7oBeSPW9xZlnPzg.png" alt="Image by Khuyen Tran" width="880" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I highly recommend checking out this Medium post by Khuyen Tran for more on how to set up pre-commit hooks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/4-pre-commit-plugins-to-automate-code-reviewing-and-formatting-in-python-c80c6d2e9f5"&gt;4 pre-commit Plugins to Automate Code Reviewing and Formatting in Python | by Khuyen Tran | Towards Data Science&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Use &lt;strong&gt;NumPy/Jax.NumPy/torch&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Using plain lists for numerical work is a bad choice. NumPy is much faster than default Python lists because it is implemented in C and uses BLAS and other optimized libraries under the hood. Moreover, it spares you from performing matrix operations with inefficient list code.&lt;/p&gt;
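&lt;p&gt;A small sketch of the difference, comparing a hand-rolled matrix-vector product against NumPy’s operator (the helper name is mine):&lt;/p&gt;

```python
import numpy as np

def matvec_lists(matrix, vector):
    """Matrix-vector product with plain nested lists (slow, pure Python)."""
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

matrix = [[1.0, 2.0], [3.0, 4.0]]
vector = [5.0, 6.0]

# The same operation in NumPy dispatches to optimized C/BLAS routines
# and stays fast as the arrays grow.
np_result = np.array(matrix) @ np.array(vector)

assert matvec_lists(matrix, vector) == list(np_result)
```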

&lt;h3&gt;
  
  
  8. Decide Soon Whether to Support &lt;strong&gt;Cython/PyPy/Numba&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Although &lt;strong&gt;PyPy&lt;/strong&gt; runs most Python code out of the box, it only supported Python &lt;strong&gt;3.7.1&lt;/strong&gt; at the time of writing, which can mean missing features in standard Python libraries. Numba, likewise, will force you to adopt a certain coding style with its decorators, and other issues may pop up due to &lt;strong&gt;@jit&lt;/strong&gt; and &lt;strong&gt;@njit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Upon further testing, the Numba decorators don’t speed up my code; however, that’s also down to the design decisions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cython.org/"&gt;Cython: C-Extensions for Python&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://numba.pydata.org/"&gt;Numba: A High-Performance Python Compiler (pydata.org)&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  9. &lt;strong&gt;Avoid Multi-threading Due to GIL&lt;/strong&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  10. &lt;strong&gt;Decide Early Whether to Support Multiprocessing&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Center the entire design, including the mathematical core of the library, around the crucial decision of whether you want to support multiprocessing! Your code structure, including new features, should adapt well to this pattern.&lt;/p&gt;
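&lt;p&gt;What “adapting to this pattern” means in practice: keep the expensive work in small, top-level, picklable functions, so they can later be handed to a &lt;code&gt;multiprocessing.Pool&lt;/code&gt; without a rewrite. A hedged sketch (the function names are illustrative):&lt;/p&gt;

```python
from multiprocessing import Pool

def fitness(candidate):
    """Top-level, pure, picklable worker: ideal for multiprocessing."""
    return sum(x * x for x in candidate)

def evaluate_population(population, processes=None):
    """Evaluate candidates serially, or in parallel when processes is given."""
    if processes is None:
        return [fitness(c) for c in population]
    with Pool(processes) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    # Serial by default; pass processes=2 to use a worker pool instead.
    print(evaluate_population([(1, 2), (3, 4)]))
```

&lt;p&gt;Because the worker is defined at module level and takes plain tuples, the serial and parallel paths stay interchangeable.&lt;/p&gt;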

&lt;h3&gt;
  
  
  11. Use &lt;strong&gt;gc&lt;/strong&gt; or &lt;strong&gt;del&lt;/strong&gt; to Delete Unwanted Objects
&lt;/h3&gt;
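&lt;p&gt;Large temporaries can be released with &lt;code&gt;del&lt;/code&gt; as soon as they are no longer needed, and &lt;code&gt;gc.collect()&lt;/code&gt; can reclaim reference cycles. A small sketch (the function is illustrative):&lt;/p&gt;

```python
import gc

def process_chunks(chunks):
    total = 0
    for chunk in chunks:
        squares = [x * x for x in chunk]  # potentially large temporary
        total += sum(squares)
        del squares  # drop the reference before the next iteration
    gc.collect()     # force a collection pass; mainly helps with reference cycles
    return total
```

&lt;p&gt;Note that CPython frees most objects as soon as their reference count hits zero; &lt;code&gt;gc.collect()&lt;/code&gt; matters mostly for cyclic structures.&lt;/p&gt;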

&lt;h3&gt;
  
  
  12. &lt;strong&gt;Instantiate the self.parameter&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Carefully initializing these parameters prevents your local names from clashing with globals (namespace collisions), so you can be sure there is no weird behavior.&lt;/p&gt;
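&lt;p&gt;A minimal sketch of the idea: bind configuration to &lt;code&gt;self&lt;/code&gt; in &lt;code&gt;__init__&lt;/code&gt; instead of reaching for module-level names (the class here is purely illustrative):&lt;/p&gt;

```python
learning_rate = 0.5  # module-level name that could otherwise clash

class Optimizer:
    def __init__(self, learning_rate=0.01):
        # Bind the value to the instance; the global of the same name is untouched.
        self.learning_rate = learning_rate

    def step(self, weight, gradient):
        return weight - self.learning_rate * gradient

opt = Optimizer(learning_rate=0.1)
```

&lt;p&gt;Each instance now carries its own configuration, and the global &lt;code&gt;learning_rate&lt;/code&gt; is never touched.&lt;/p&gt;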

&lt;h3&gt;
  
  
  13. &lt;strong&gt;Write Unit Tests with unittest or pytest&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This can again save a lot of time: just run the tests and be assured your code works.&lt;/p&gt;

&lt;p&gt;Also check the &lt;strong&gt;output of each return value, despite automated tests&lt;/strong&gt;. Sometimes, before you can define a test, you need to identify the possible sources of bugs, which may only be possible by running tests on many different values or by checking manually. Writing an examples notebook on Colab and trying different things out works well for this!&lt;/p&gt;
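&lt;p&gt;A tiny pytest-style example (the function under test is illustrative); plain &lt;code&gt;assert&lt;/code&gt; statements are all pytest needs:&lt;/p&gt;

```python
def moving_average(values, window):
    """Simple moving average; raises on a non-positive window."""
    if window >= 1:
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]
    raise ValueError("window must be positive")

def test_moving_average_basic():
    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]

def test_moving_average_rejects_bad_window():
    try:
        moving_average([1, 2], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

&lt;p&gt;Run &lt;code&gt;pytest&lt;/code&gt; in the project root and every function named &lt;code&gt;test_*&lt;/code&gt; is collected automatically.&lt;/p&gt;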

&lt;h3&gt;
  
  
  14. Polish Current Features instead of Obsessing over New Features
&lt;/h3&gt;

&lt;p&gt;I’m guilty of this one, and am probably still learning it as we speak. Adding features isn’t bad, but it needs a fine balance; you shouldn’t get carried away!&lt;/p&gt;

&lt;p&gt;Do not obsess over adding new features when your current features aren’t polished enough; new features can aggravate your existing issues and bugs. Good luck trying to solve the same bugs across all the features you implemented.&lt;/p&gt;

&lt;p&gt;And that’s it. Thank you for reading.&lt;/p&gt;

&lt;p&gt;Ciao!&lt;/p&gt;

&lt;p&gt;Stalk me on Twitter! 👀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/agrover112"&gt;Agrover112 (@agrover112) / Twitter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or see my garbage code on GitHub 💻:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Agrover112"&gt;Agrover112 (github.com)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The package I created and talk about above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Agrover112/fliscopt"&gt;Agrover112/fliscopt: Algorithms for flight scheduling optimization (github.com)&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Highlights from Neuromatch 4.0 Conference</title>
      <dc:creator>Agrover112</dc:creator>
      <pubDate>Fri, 14 Jan 2022 06:19:43 +0000</pubDate>
      <link>https://dev.to/agrover112/highlights-from-neuromatch-40-conference-mob</link>
      <guid>https://dev.to/agrover112/highlights-from-neuromatch-40-conference-mob</guid>
      <description>&lt;h2&gt;
  
  
  Highlights from Neuromatch 4.0 Conference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hOWx_NhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQFSv54noOpnD-ZmYfe_lXQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hOWx_NhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQFSv54noOpnD-ZmYfe_lXQ.png" alt="Neuormatch 4.0 Conference" width="471" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are a few interesting papers and posters from the Neuromatch 4.0 Conference (held in December 2021), with topics at the intersection of neuroscience and AI.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How Do Dendritic Properties Impact Neural Computation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8RsRHTIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7644/1%2ArTk8GFUZFItyfOKUhfikRA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8RsRHTIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7644/1%2ArTk8GFUZFItyfOKUhfikRA.png" alt="How Do Dendritic Properties Impact Neural Computation" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the first talk, by authors Ilenna Jones and Konrad Kording, dendritic processes (forming a tree-like structure) are used to solve machine learning tasks, as opposed to simple connection weights. It’s interesting to see how modelling dendrites and synapses with constraints can solve machine learning tasks.&lt;/p&gt;

&lt;p&gt;2. &lt;a href="https://www.youtube.com/watch?v=MkAHBgt3d2w"&gt;Model dimensionality scales with the performance of deep learning models for biological vision&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NnxT8kRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7640/1%2AUJfSO53jHuH-PCjT_AwTnA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NnxT8kRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7640/1%2AUJfSO53jHuH-PCjT_AwTnA.png" alt="Model dimensionality scales with the performance of deep learning models for biological vision" width="880" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this paper, authors Eric Elmoznino and Michael Bonner discuss how scaling the &lt;a href="https://datascience.stackexchange.com/questions/5694/dimensionality-and-manifold"&gt;manifold&lt;/a&gt; dimensionality increases performance on vision-related tasks, as opposed to various theories that the brain tries to represent information in a highly compressed format.&lt;/p&gt;

&lt;p&gt;3. &lt;a href="https://www.youtube.com/watch?v=J6r2LpKm7jg"&gt;Visualization and Graph Exploration of the Fruit Fly Brain Datasets with NeuroNLP++&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CfeVIGDj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7680/1%2AOj9FYXojGyIek7AJzKAlRQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CfeVIGDj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7680/1%2AOj9FYXojGyIek7AJzKAlRQ.png" alt="NeuroNLP++" width="880" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NeuroNLP++ is an improvement over NeuroNLP, which uses rule-based NLP methods for searching various queries related to the fruit fly brain. Rule-based search is limiting, since questions can take any form beyond the restricted usage. In this presentation, Dense Passage Retrieval and PubMedBERT are used to query a collection of ontology terms and their descriptions, which enables visualization of these queries in the 3D brain animation.&lt;/p&gt;

&lt;p&gt;4. &lt;a href="https://www.youtube.com/watch?v=eIltg5BeQG8"&gt;A large scale computational model of visual word recognition and its comparison to MEG data&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X2Q-Frvy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7606/1%2A3TEwD4AzYAcihLa1n9IWeQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X2Q-Frvy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7606/1%2A3TEwD4AzYAcihLa1n9IWeQ.png" alt="MEG Data" width="880" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this talk, Marijn van Vliet presents a baseline model to map onto the neural signatures observed during neuroimaging studies. He shows how neural activity in brain regions responds similarly to the deep layers of a convolutional network, and offers this model as a baseline.&lt;/p&gt;

&lt;p&gt;5. &lt;a href="https://www.youtube.com/watch?v=eD97sw3MlwA"&gt;NMC4: Mehdi Orouji-Distilling low-dimensional representations of data using TRACE model&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mZrvfGCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7254/1%2ABmnxN48__AwXTGr35CmuXg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mZrvfGCB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7254/1%2ABmnxN48__AwXTGr35CmuXg.png" alt="" width="880" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this talk, authors Mehdi Orouji and Megan Peters propose TRACE (Task-Relevant Autoencoder via Classifier Enhancement), a non-parametric, task-relevant model that encodes more task-relevant features in its bottleneck-layer encoding. TRACE also outperforms vanilla autoencoders in reconstruction performance.&lt;/p&gt;

&lt;p&gt;6. &lt;a href="https://www.youtube.com/watch?v=mJ0E4nQmMGo"&gt;NMC4 — Hung Lo — Binge eating suppresses flavor representations in the mouse olfactory cortex&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0IvUdsne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7096/1%2A1VO3SxCZK3-ImwfeYqYegQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0IvUdsne--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7096/1%2A1VO3SxCZK3-ImwfeYqYegQ.png" alt="" width="880" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this talk, Hung Lo tests the hypothesis that binge eating has a causal relationship with neuronal activity in the olfactory cortex. As evidence, he shows how inhibitory neurons exhibit similar neuronal activity after repeated trials, and how serotonin production from the dorsal raphe neurons cannot be responsible for the suppression phenomena.&lt;/p&gt;

&lt;p&gt;7. &lt;a href="https://www.youtube.com/watch?v=b_x_XTHkRoE"&gt;Do rats see like us? The importance of contrast features in rat vision — YouTube&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1PUHMyY3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7674/1%2ACAMhgebBlqG1byOpfE1JGw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1PUHMyY3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7674/1%2ACAMhgebBlqG1byOpfE1JGw.png" alt="" width="880" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anna Elizabeth Schnell and Hans Op de Beeck consider whether contrast features are enough to explain face recognition tasks in rats.&lt;/p&gt;

&lt;p&gt;8. &lt;a href="https://arxiv.org/abs/2109.13610"&gt;Rank similarity filters for computationally-efficient machine learning on high dimensional data&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j_hs6Z5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7408/1%2Aq7yLRRBRTIn9OecRFAJldQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j_hs6Z5w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7408/1%2Aq7yLRRBRTIn9OecRFAJldQ.png" alt="" width="880" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This talk presents a new technique that is essentially a non-parametric, single-layer neural network, similar to how an SVM is an approximation of an MLP. It uses the concept of filters: each filter captures some information from &lt;em&gt;N&lt;/em&gt;-dimensional inputs, while its weights are decided using rank or “confusion” techniques. Using this method, one can transform any dataset into a linearly separable form. The resulting classifier can also be used to reduce the dimensionality of data, and it yields a roughly 10x performance improvement over traditional SVMs and KNNs while maintaining accuracy across large datasets.&lt;/p&gt;

&lt;p&gt;9. &lt;a href="https://www.youtube.com/watch?v=4ex7c6rPE88"&gt;An investigation of the relationship between extroversion and brain laterality&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bgowpbNu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7282/1%2AnYiMqypOsTJCaVBZbe5NLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bgowpbNu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7282/1%2AnYiMqypOsTJCaVBZbe5NLw.png" alt="" width="880" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on data from previous research, authors Amirreza Farbakhsh and Tarannom Taghavi show that there is a relationship between extroversion and laterality of the brain. Laterality refers to how cognitive functions, such as language acquisition, articulation, motor control, or emotional control, originate in either the right or left hemisphere of the brain. The authors measure laterality [1] as the average difference in voxel activation between the two hemispheres, and they use t-tests along with NEO-FFI scores to find correlations with their laterality scores.&lt;/p&gt;

&lt;p&gt;10. Where Is All the Non-linearity? (video not available)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BcX3CRhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7668/1%2AH6dFJDqqjld9zXw9tnIJsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BcX3CRhD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7668/1%2AH6dFJDqqjld9zXw9tnIJsw.png" alt="" width="880" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The author proposes the RNN PSID (Recurrent Neural Network Preferential Subspace Identification) model. RNN PSIDs use the latent states from neuronal activations for behavioral decoding and are also helpful for better causal decoding. The non-linearity of the RNN parameters is sufficient to capture such non-linear dynamics from LFP and spiking signals.&lt;/p&gt;

&lt;p&gt;11. &lt;a href="https://www.youtube.com/watch?v=xFvV2ltMB_s"&gt;Mouse SST and VIP Interneurons in V1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bDWphuqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4594/1%2AXZZwcpVd7DoN5QrlguAUFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bDWphuqs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4594/1%2AXZZwcpVd7DoN5QrlguAUFg.png" alt="Mouse SST and VIP Interneurons in V1" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As someone who has &lt;a href="https://github.com/seungjaeryanlee/nma-cn-project"&gt;worked on analyzing V1 before[2]&lt;/a&gt;, I was especially interested in this work. The authors inquire whether SST or VIP interneurons respond more strongly to novel visual stimuli. They use statistical ML techniques with 8-fold cross validation to predict the stimulus response.&lt;/p&gt;

&lt;p&gt;12. &lt;a href="https://www.youtube.com/watch?v=aTqgPbKQUD8"&gt;Universal Theory of Switching&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--opcGchVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7678/1%2AwDEwwYrDQD4Wpvdwg4vk4Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--opcGchVW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/7678/1%2AwDEwwYrDQD4Wpvdwg4vk4Q.png" alt="Universal Theory of Switching" width="880" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This poster will appeal to anyone who finds attention mechanisms or genetic algorithms interesting. The authors present an idea about how switching mechanisms are responsible for improvements in biological organisms as well as in simulated beings. They take this idea to an extreme by implementing “switching” &lt;a href="https://github.com/Agrover112/fliscopt"&gt;within an algorithm&lt;/a&gt; (intra-algorithmic switching) as well as between two different algorithms (inter-algorithmic switching).&lt;/p&gt;

&lt;h2&gt;
  
  
  Ciao!
&lt;/h2&gt;

&lt;p&gt;Before you leave, make sure to follow me on Twitter as &lt;a href="https://twitter.com/agrover112"&gt;@agrover112&lt;/a&gt; for more content on topics such as computational neuroscience, artificial intelligence, and more!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;[1] &lt;a href="https://www.simplypsychology.org/brain-lateralization.html"&gt;Lateralization of Brain Function | Simply Psychology&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] &lt;a href="https://github.com/seungjaeryanlee/nma-cn-project"&gt;seungjaeryanlee/nma-cn-project: Project for Neuromatch Academy: Computational Neuroscience (github.com)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>computerscience</category>
      <category>deeplearning</category>
      <category>neuroscience</category>
    </item>
  </channel>
</rss>
