<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: R Post</title>
    <description>The latest articles on DEV Community by R Post (@rpost).</description>
    <link>https://dev.to/rpost</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F201580%2Ff32e244b-14ac-42d1-962c-1a27b75fe3d5.jpg</url>
      <title>DEV Community: R Post</title>
      <link>https://dev.to/rpost</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rpost"/>
    <language>en</language>
    <item>
      <title>Intro to schema.ini for Tableau users</title>
      <dc:creator>R Post</dc:creator>
      <pubDate>Tue, 07 Jan 2020 17:04:22 +0000</pubDate>
      <link>https://dev.to/rpost/intro-to-schema-ini-for-tableau-2jfg</link>
      <guid>https://dev.to/rpost/intro-to-schema-ini-for-tableau-2jfg</guid>
      <description>&lt;h1&gt;
  
  
  Stabilize my data types, please!
&lt;/h1&gt;

&lt;p&gt;I work on a reporting system that uses Tableau for the user-facing side, with some fun Python pandas code to bring data from multiple systems together in a way that makes the reports useful. One issue I was running into is that each time I updated the csv files that are pulled into Tableau, some fields would get switched to a different data type. Then the calculations based on those columns would break because of the data type change. So much fun.&lt;/p&gt;

&lt;p&gt;I searched online for solutions to the problem, and found that people use a schema.ini file to identify the columns of a file and their associated data types. Ok, cool, thanks InterWebs! &lt;/p&gt;

&lt;p&gt;Wait, how do I create one? How do I upload it to Tableau? Where should it be saved? How do I make more than one because I have multiple csvs to upload? This is where my search fell short - I couldn't find a clear source for how the heck to do the whole process when the goal is a Tableau workbook.&lt;/p&gt;

&lt;p&gt;After I had my schema.ini file up and running, I realized that the Tableau community forum has a good bit of info on it -- look there if you have any questions not covered here. I will be starting there, instead of Google, in the future!&lt;/p&gt;

&lt;h1&gt;
  
  
  What is schema.ini? Should I use it?
&lt;/h1&gt;

&lt;p&gt;It turns out that this file is used by some database engines, like Microsoft JET, to identify the columns in a file as well as their data types. Microsoft JET is what Tableau uses to import your data from a text file, so it will use the schema.ini file if it exists. If it doesn't exist, JET will guess at your data types. &lt;/p&gt;

&lt;p&gt;You can even use schema.ini for fixed-width files by adding a Width indicator. Fixed-width files are such a pain, but some of the systems I work with are set up to only output these. Knowing that I could use them with Tableau is a super win for me!&lt;/p&gt;
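&lt;p&gt;As a sketch (the file and column names here are made up), a fixed-width entry adds a Width after each column's type:&lt;/p&gt;

```ini
[export.txt]
Format=FixedLength
ColNameHeader=False
Col1=StudentID Text Width 9
Col2=Term Text Width 6
Col3=Credits Long Width 3
```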

&lt;p&gt;If you want to be sure that your text file, whether a csv, tsv, or fixed-width, imports the same way each time to Tableau, you probably want it.&lt;/p&gt;

&lt;h1&gt;
  
  
  How do I create one?
&lt;/h1&gt;

&lt;p&gt;Luckily, this part was pretty easy, as there are good examples on the internet. I started with a short example from &lt;a href="https://community.tableau.com/thread/189010?_ga=2.178941156.178571609.1578345404-329779632.1563224944"&gt;a Tableau forum post&lt;/a&gt;,&lt;br&gt;
and there is more info available &lt;a href="https://docs.microsoft.com/en-us/sql/odbc/microsoft/schema-ini-file-text-file-driver?view=sql-server-ver15"&gt;directly from Microsoft&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can find all of the possible data types you might use &lt;a href="https://en.wikibooks.org/wiki/JET_Database/Data_types"&gt;on the JET wiki&lt;/a&gt;.&lt;/p&gt;
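&lt;p&gt;To make that concrete, here's a sketch of a schema.ini entry for a csv - the file name, columns, and types below are made up, so swap in your own:&lt;/p&gt;

```ini
[sales.csv]
Format=CSVDelimited
ColNameHeader=True
DateTimeFormat=yyyy-mm-dd hh:nn:ss
Col1=OrderID Long
Col2=Region Text
Col3=Amount Double
Col4=OrderDate DateTime
```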

&lt;h1&gt;
  
  
  Where do I save it? How do I upload it?
&lt;/h1&gt;

&lt;p&gt;Save it in the same folder as your text file(s). JET will auto-magically find it, so you don't need to (and can't!) upload it to Tableau.&lt;/p&gt;

&lt;h1&gt;
  
  
  What do I do about multiple files in the same folder?
&lt;/h1&gt;

&lt;p&gt;Since it's just one schema.ini file per folder, you will put the info for all of your text files in that single file. The order doesn't seem to matter, but list each file name, followed by its column information. You cannot define the columns for multiple files together (i.e. no "*.txt"). &lt;/p&gt;
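&lt;p&gt;For example, a single schema.ini covering two (made-up) csv files in the same folder just stacks the sections:&lt;/p&gt;

```ini
[enrollment.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=StudentID Text
Col2=Credits Long

[grades.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=StudentID Text
Col2=GPA Double
```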

&lt;h1&gt;
  
  
  Other tips
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;This file has a strong effect! If you forget a quotation mark around a column name, you will end up with a field name you didn't intend, even if that's not how it appears in the csv file. It's no big deal - just go back and change it, then refresh the data (at least in Tableau 10). This also means that you could change a column name in the schema file without changing it in the text file, whether you intend to or not.&lt;/li&gt;
&lt;li&gt;If you forget a column, it will not be pulled in. At first I accidentally left the final column out of the list, and it didn't appear in Tableau at all.&lt;/li&gt;
&lt;li&gt;If you have dates, you can specify their formats, too. The DateTime format is "yyyy-mm-dd hh:nn:ss" - note that it's all lowercase. You can, of course, specify a different format, such as "mm-dd-yyyy" if that's what you happen to have.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Go forth, and have stable data types!
&lt;/h1&gt;

</description>
      <category>dataanalysis</category>
      <category>tableau</category>
    </item>
    <item>
      <title>Population Growth and Housing Availability</title>
      <dc:creator>R Post</dc:creator>
      <pubDate>Fri, 27 Sep 2019 21:59:50 +0000</pubDate>
      <link>https://dev.to/rpost/population-growth-and-housing-affordability-79e</link>
      <guid>https://dev.to/rpost/population-growth-and-housing-affordability-79e</guid>
      <description>&lt;h1&gt;
  
  
  New housing and new residents
&lt;/h1&gt;

&lt;p&gt;A certain amount of housing pricing is due to supply and demand - more people means more demand. It can be hard, if not impossible, to keep up with that demand, which means more expensive housing. Of course there are &lt;a href="https://www.curbed.com/2019/5/15/18617763/affordable-housing-policy-rent-real-estate-apartment"&gt;tons of factors that play into housing affordability&lt;/a&gt;, but supply (new units being built) and demand (new residents moving to an area) are two factors with easy-to-find data that can show their impact. &lt;/p&gt;

&lt;p&gt;According to the Census estimates, the Austin-Round Rock Metropolitan area grew by 53,086 people from July 1, 2017 to July 1, 2018. A little less than a quarter, or 23.6%, of those individuals moved to live within the Austin city limits. That also means that about 40,000 people moved into the surrounding areas. During that same period, construction was completed on 12,453 housing units. Using the Census average household size of 2.48 people, that means we gained enough housing for a little over 30,000 people! This, in theory, should have helped slow down housing cost increases.&lt;/p&gt;
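&lt;p&gt;The arithmetic above is quick to check - here it is as a small Python sketch (the variable names are mine; the figures are the Census numbers quoted above):&lt;/p&gt;

```python
# 2017-2018 Austin-Round Rock figures quoted above, from the Census estimates.
metro_growth = 53_086      # new residents in the metro area
share_in_city = 0.236      # portion that settled inside the Austin city limits
units_completed = 12_453   # housing units finished in the same period
avg_household = 2.48       # Census average household size

in_city = round(metro_growth * share_in_city)    # people who moved into the city
outside = metro_growth - in_city                 # people in the surrounding areas
housed = round(units_completed * avg_household)  # people the new units can hold

print(in_city, outside, housed)  # 12528 40558 30883
```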

&lt;p&gt;Austin has been experiencing a similar growth rate since 2010, but there have not been enough housing units completed to keep up with the change - 9,449 were completed between July 1, 2018 and June 30, 2019. Using the same average household size, that made room for just 23,400 people, which means that well over half of the new residents moved to the area, but outside of the city limits. The low number of new units likely increased the cost of housing by not meeting demand.&lt;/p&gt;

&lt;p&gt;If units continue to be finished at the same rate, we will see 13,750 more units become available by June 30, 2020, which should be able to provide housing for about 34,100 new residents at the current average household size. While these are not huge numbers, and may not lower average rents, they may help slow the relentless increase of housing costs.&lt;/p&gt;

&lt;h1&gt;
  
  
  Dev - estimating upcoming housing availability
&lt;/h1&gt;

&lt;p&gt;Here's why this is dev-related for me. Once you have the data, it's fairly easy to figure out how many houses have been built within a certain period: look at the building permits with a status of "Final," along with their Status date. (The Status date looks to have been introduced in 2007, and retroactively applied through 2008, so even this method only works for about the last 10 years.) &lt;/p&gt;

&lt;p&gt;More difficult is estimating the units that may be completed soon. To do this, I first created a field called "Time to Completion" that finds the difference between the Completed Date and the Issue Date. That allowed me to look at the average completion time in days. I narrowed the time range down to the last two years because construction methods change, crew availability changes, and I thought two years would be a large enough time frame to give a broad average while also reflecting a state similar to what we're facing now. (As I'm a developer, and not a construction or real estate professional, I might be wrong about that.)&lt;/p&gt;

&lt;p&gt;If you look at the average completion time for all Building Permits related to new housing units, from July 1, 2017 to now, it is 331.7 days. I did exclude 16 permits that were listed as taking over 2,000 days to complete. That's 5 1/2 years! Something seems to have gone really wrong with those, and I'm comfortable calling them outliers for our current purposes. That may seem like a good number to use, but think for a second longer - does it take the same amount of time to build a 3,000-unit complex as a single house? I hope not! I looked at the relationship between number of units and time to completion and, yes! Common sense works here: it takes longer to build more units. You can see it in a &lt;a href="https://public.tableau.com/profile/rebekah3261#!/vizhome/AustinTXUnitCompletionEstimates/EstimatingUnitCompletion?publish=yes"&gt;dashboard I made on Tableau Public&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I added a trend line to my scatter plot of Housing Units and Time to Completion. We could use the nice formula that Tableau generated based on number of units alone, but the line goes way above the reality for larger complexes, which means we would seriously over-estimate completion time for larger projects, like those with more than 150 units. What else might be at play? Next I checked the time to completion by permit class and wow! Now we see a range from 297.5 days to 989.9 days. That's from about 10 months at the shortest to over two and a half years! That seems like a better estimate.&lt;/p&gt;

&lt;p&gt;After exploring these options, I decided to use the average Time to Completion by Permit Class. I created a new field that calculates an Estimated Completion Date based on the Issue Date plus the average Time to Completion for a given permit's Permit Class. If you are a more visual person, go download &lt;a href="https://public.tableau.com/profile/rebekah3261#!/vizhome/AustinTXUnitCompletionEstimates/EstimatingUnitCompletion?publish=yes"&gt;my Tableau workbook&lt;/a&gt; and play with it! &lt;/p&gt;

&lt;p&gt;The final step was to account for &lt;em&gt;actual completed date&lt;/em&gt; and &lt;em&gt;estimated completed date&lt;/em&gt; at the same time. I created one more field that looks at the project status, decides what date to use (actual or estimate) and then allows us to view all of the building permits together, whether they have been completed yet or not.&lt;/p&gt;
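&lt;p&gt;If you'd rather see those calculated fields as code, here is a rough pandas equivalent - the column names and sample values are hypothetical, and the real dataset's fields will differ:&lt;/p&gt;

```python
import pandas as pd

# Hypothetical permits: two finished-or-pending residential classes.
permits = pd.DataFrame({
    "permit_class": ["R-101", "R-101", "C-103"],
    "issued_date": pd.to_datetime(["2018-01-10", "2019-03-01", "2019-06-15"]),
    "completed_date": pd.to_datetime(["2018-11-01", pd.NaT, pd.NaT]),
})

# "Time to Completion" in days; NaT (still open) becomes NaN here.
permits["days_to_complete"] = (
    permits["completed_date"] - permits["issued_date"]
).dt.days

# Average Time to Completion per Permit Class. A class with no finished
# permits (C-103 here) stays NaN - a real dataset would need a fallback.
avg_days = permits.groupby("permit_class")["days_to_complete"].transform("mean")

# Estimated Completion Date = Issue Date + class average.
permits["estimated_completion"] = permits["issued_date"] + pd.to_timedelta(
    avg_days, unit="D"
)

# Use the actual date when the permit is finished, the estimate otherwise.
permits["effective_completion"] = permits["completed_date"].fillna(
    permits["estimated_completion"]
)
```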

&lt;h1&gt;
  
  
  Should we ever expect housing costs in Austin to go down?
&lt;/h1&gt;

&lt;p&gt;If more people are moving to Austin and the area every year, and are increasing the demand for housing, then we can never really expect the housing costs to go down as long as the economy is doing well. It's awkward, but what incentive do housing developers have to make houses that will sell for less? Apartments that command lower rents may be profitable to management companies because they could tap a separate market, but at some point the profit may not be worth it to those companies. That is where local government steps in - they can (and do!) create incentives for developers to build affordable units.&lt;/p&gt;

&lt;p&gt;A related issue is land. You can build on empty land, or you can tear down existing units. The former is very limited within city limits, and the latter often results in simple replacement of older (read: cheaper) housing with newer (read: more expensive) housing. Changing the land development code is one way to encourage greater density, which will allow for more housing within city limits. Existing options, like two detached homes on a single lot, are becoming more popular as a way for those willing to live in smaller homes to stay within the city. This, then, is another area where local government can help control housing costs. And we'll get into that next time.&lt;/p&gt;

</description>
      <category>data</category>
      <category>analytics</category>
      <category>techforgood</category>
    </item>
    <item>
      <title>Housing affordability in Austin, TX</title>
      <dc:creator>R Post</dc:creator>
      <pubDate>Fri, 30 Aug 2019 18:38:45 +0000</pubDate>
      <link>https://dev.to/rpost/housing-affordability-in-austin-tx-1ne2</link>
      <guid>https://dev.to/rpost/housing-affordability-in-austin-tx-1ne2</guid>
      <description>&lt;h1&gt;
  
  
  The Motivation
&lt;/h1&gt;

&lt;p&gt;I live in a neighborhood of Austin with a lot of renovations and construction happening. I know from personal experience that the city permitting department has an online search - we did renovations, too. I realized that they also have a public API, so I decided to explore whether I could map the active building permits in my area and get a better understanding of how the neighborhood is changing. &lt;/p&gt;
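&lt;p&gt;For the curious: the city's portal is a Socrata site, so its datasets can be queried over the SODA API. Here's a minimal sketch of building such a query URL - the dataset id and field name are placeholders you'd look up on data.austintexas.gov:&lt;/p&gt;

```python
from urllib.parse import urlencode

def permit_query_url(dataset_id, where=None, limit=1000):
    """Build a SODA JSON endpoint URL with an optional $where filter."""
    params = {"$limit": limit}
    if where:
        params["$where"] = where
    return f"https://data.austintexas.gov/resource/{dataset_id}.json?{urlencode(params)}"

# "xxxx-xxxx" stands in for a real dataset id; "status_current" is a guess
# at a field name - check the dataset's API docs for the real ones.
url = permit_query_url("xxxx-xxxx", where="status_current = 'Active'")
```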

&lt;h1&gt;
  
  
  The Situation
&lt;/h1&gt;

&lt;p&gt;It turns out that the city's dataset is an amazingly rich source of data! I quickly saw what I was interested in for my own neighborhood, and realized that there are implications for the region as well. Many sources will tell you that Austin is one of the fastest growing metro regions in the country (&lt;a href="http://austin.culturemap.com/news/city-life/02-26-19-austin-area-population-in-2019-growth-rate-reports-demographer/"&gt;1&lt;/a&gt;, &lt;a href="https://www.opendatanetwork.com/entity/310M200US12420/Austin_Metro_Area_TX/demographics.population.count?year=2017"&gt;2&lt;/a&gt;, &lt;a href="https://www.bizjournals.com/austin/news/2019/04/18/as-some-big-cities-lose-residents-austin-is-adding.html"&gt;3&lt;/a&gt;, among others). One downside of this growth is that housing becomes less and less affordable as more people arrive and increase demand. Many musicians, an important demographic for the "Live Music Capital of the World," are forced to move out of the city proper, along with students at the five higher-ed institutions; even many two-income households cannot afford to stay within the city limits. Life further out has lower housing costs, but requires more time in traffic, creates more congestion, and takes up time that most of us would rather spend on literally anything else.&lt;/p&gt;

&lt;p&gt;The city is aware of the affordability problem and is &lt;a href="https://data.austintexas.gov/stories/s/Household-Affordability/czit-acu8/"&gt;up-front about where things stand.&lt;/a&gt; Two key affordability indicators that are moving in the wrong direction are Median House Value and Median Gross Rent. Both of these are affected by the number of housing units available. If housing costs are going to go down, there need to be more units available where people want to live. At base, it's supply and demand - when we don't have enough units, owners and sellers can charge more. When we have plenty, they have to charge less. &lt;/p&gt;

&lt;h1&gt;
  
  
  The dev connection
&lt;/h1&gt;

&lt;p&gt;"Wait, wait," you say, "Ok, I can see that this is &lt;strong&gt;A Thing&lt;/strong&gt;, but why should I, as a developer, care?" &lt;/p&gt;

&lt;p&gt;First, I hope that you care because you don't like to see other humans struggle to earn enough for food and housing.&lt;/p&gt;

&lt;p&gt;Second, Austin is a hot location for tech jobs, and &lt;a href="https://www.kut.org/post/how-wave-tech-expansion-could-further-strain-affordability-austin"&gt;our relatively high salaries help push up housing prices.&lt;/a&gt; The median household income here, &lt;a href="https://austin.curbed.com/2019/4/15/18311751/austin-rent-median-affordable-wages-salaries-burdened"&gt;at a little under 66K according to some estimates,&lt;/a&gt; is less than many tech salaries. While we can handle the cost of living, we are pulling up the median salary and making it harder for others. I don't think that's bad -  I enjoy making a decent salary! I do think that we should be aware of the impact of our field, and that not everyone else can earn similar wages.&lt;/p&gt;

&lt;p&gt;Third, we know how to use APIs, so we can actually do something with the available data. Those of us who can present the numbers and break them down in visualizations can help make sense of a pretty complicated problem.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Question
&lt;/h1&gt;

&lt;p&gt;This brings us to a measurable question: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If we compare the number of housing units being built in the city to the number of people moving to the area, how many newcomers can find a place within the city, and how many will be fighting traffic from the surrounding areas?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, do we have a chance to get ahead of this affordability problem any time soon?&lt;/p&gt;

&lt;h1&gt;
  
  
  The Plan
&lt;/h1&gt;

&lt;p&gt;In order to determine if the ratio of new people to new housing favors lower housing costs, we need to determine the expected number of new people moving to the area, and what we can expect for new housing units. The population estimates are fairly straightforward to find, so I'll work on the housing ones.&lt;/p&gt;

&lt;p&gt;Until that's done, feel free to check out the map of active residential permits in Austin, TX as of 8/20/19: &lt;a href="https://public.tableau.com/profile/rebekah3261#!/vizhome/NewUnits/ActivePermitUnits"&gt;https://public.tableau.com/profile/rebekah3261#!/vizhome/NewUnits/ActivePermitUnits&lt;/a&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>analytics</category>
      <category>techforgood</category>
      <category>visualization</category>
    </item>
    <item>
      <title>Django with an Oracle Legacy DB</title>
      <dc:creator>R Post</dc:creator>
      <pubDate>Thu, 22 Aug 2019 22:25:17 +0000</pubDate>
      <link>https://dev.to/rpost/django-with-an-oracle-legacy-db-45gl</link>
      <guid>https://dev.to/rpost/django-with-an-oracle-legacy-db-45gl</guid>
      <description>&lt;h1&gt;
  
  
  The problem
&lt;/h1&gt;

&lt;p&gt;I am working on a project that requires connecting an Oracle legacy database to a Django project. I found the steps slowly, and in a variety of different places, so I am bringing them together in case someone else has a similar problem in the future!&lt;/p&gt;

&lt;h1&gt;
  
  
  The basics
&lt;/h1&gt;

&lt;p&gt;Getting started with a database connection in Django is laid out &lt;a href="https://docs.djangoproject.com/en/2.2/ref/databases/"&gt;in the docs&lt;/a&gt;. There you will find how to set up the database info in your settings.py file. &lt;strong&gt;Remember that your password should NOT go into the settings.py file that you commit to your repo.&lt;/strong&gt; There are several options for storing passwords, and I use a local_settings.py file. In my &lt;code&gt;settings.py&lt;/code&gt; file, my DATABASES look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.oracle',
        'NAME': 'xe',
        'USER': 'a_user',
        'PASSWORD': '',
        'HOST': '',
        'PORT': '',
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Notice that the password is an empty string. Then my &lt;code&gt;local_settings.py&lt;/code&gt; includes the password:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.oracle',
        'NAME': 'xe',
        'USER': 'a_user',
        'PASSWORD': 'a_password',
        'HOST': '',
        'PORT': '',
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
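&lt;p&gt;The docs don't prescribe how &lt;code&gt;local_settings.py&lt;/code&gt; gets loaded; one common pattern (an assumption on my part - there are other ways) is an import at the bottom of &lt;code&gt;settings.py&lt;/code&gt; so local values override the committed defaults:&lt;/p&gt;

```python
# End of settings.py: pull in secrets/overrides from an uncommitted file.
# local_settings.py sits next to settings.py and is listed in .gitignore.

DEBUG = False  # committed default; local_settings.py may override it
               # along with PASSWORD

try:
    from .local_settings import *  # noqa: F401,F403
except ImportError:
    # No local overrides (fresh clone, CI, etc.); the defaults above stand.
    pass
```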



&lt;p&gt;The Django docs mention &lt;code&gt;tnsnames.ora&lt;/code&gt;, and this is how I connect to my legacy database because it lets you use a service name. However, the docs don't say WHERE in the project structure &lt;code&gt;tnsnames.ora&lt;/code&gt; goes. Using the structure that's standard since Django 1.4, I put mine here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_project
├── manage.py
├── my_app
│   ├── [...]
└── my_project
    ├── __init__.py
    ├── _batch_settings.py
    ├── settings.py
    ├── urls.py
    └── oracle           &amp;lt;~~~ Create this folder!
        └── tnsnames.ora &amp;lt;~~~ Create this file!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;I read that Oracle can be super picky about line endings, so I created my tnsnames.ora file in my VM. I'm spoiled because my org has a standard VM to use, so I'm sorry if you can't do this easily and run into line-ending trouble. &lt;/p&gt;

&lt;p&gt;Now that you've made the new folder and file, tell Django where it is. In your &lt;code&gt;settings.py&lt;/code&gt; file, you should set your &lt;code&gt;TNS_ADMIN&lt;/code&gt; environment variable to point to the oracle directory created within your project folder:&lt;br&gt;
&lt;code&gt;os.environ['TNS_ADMIN'] = os.path.join(BASE_DIR, '&amp;lt;project_name&amp;gt;', 'oracle')&lt;/code&gt;  &lt;/p&gt;
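&lt;p&gt;For reference, a minimal &lt;code&gt;tnsnames.ora&lt;/code&gt; entry looks something like this - the alias, host, and service name below are placeholders for your own values:&lt;/p&gt;

```
# All values below are placeholders - ask your DBA for the real ones.
LEGACYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.edu)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = legacy.example.edu)
    )
  )
```

&lt;p&gt;With an entry like that, the &lt;code&gt;NAME&lt;/code&gt; in your &lt;code&gt;DATABASES&lt;/code&gt; setting would be the alias, e.g. &lt;code&gt;LEGACYDB&lt;/code&gt;.&lt;/p&gt;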
&lt;h1&gt;
  
  
  Recognize the DB
&lt;/h1&gt;

&lt;p&gt;Ok, so the basics are ready! As someone who had never started a Django app with a legacy DB, I was immediately stuck again. But the next steps are the same as for any database. Since I wasn't setting up a new database, I wasn't sure that I needed these steps, but they are necessary. Go over to the terminal/command line and:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python manage.py makemigrations &amp;lt;appname&amp;gt;
$ python manage.py migrate &amp;lt;appname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After each of these commands, the terminal window shows you the progress on each step. &lt;/p&gt;

&lt;p&gt;Now if you are lucky and you have tables or views you need in your schema, you can use the &lt;code&gt;inspectdb&lt;/code&gt; command to give you a good start on your models. In fact, this is the &lt;a href="https://docs.djangoproject.com/en/2.2/howto/legacy-databases/"&gt;recommended way to integrate with a legacy DB in the docs.&lt;/a&gt; Skip ahead to testing if &lt;code&gt;inspectdb&lt;/code&gt; works for you. However, if your schema does &lt;strong&gt;not&lt;/strong&gt; contain any tables or views and exists solely to give you access to tables within another schema, our princess is in another castle. Let's go find her.&lt;/p&gt;

&lt;h1&gt;
  
  
  The next castle
&lt;/h1&gt;

&lt;p&gt;Without the help of &lt;code&gt;inspectdb&lt;/code&gt;, you need to write up a little model of your own before you can be sure that you are properly connected. This can be quite short if you want to start with something small for testing. I would recommend including at least two fields: one to query based on, and another to prove you got what you wanted. You will also need to include &lt;code&gt;class Meta&lt;/code&gt; in order to tell the Django ORM which schema and table you want to query. A basic model can be quite short:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Student(models.Model):
    id = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=50, blank=True, null=True)

    class Meta:
        managed = False
        db_table = '"&amp;lt;SCHEMA_NAME&amp;gt;"."&amp;lt;TABLE_NAME&amp;gt;"'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Pay attention to all of those single and double quotation marks for the db_table - Oracle is very dumb and can only find the schema and table if you tell it in exactly the right way. Now that you have a new model, re-run your &lt;code&gt;python manage.py makemigrations &amp;lt;appname&amp;gt;&lt;/code&gt;. This will make sure Django knows about your new models. Follow it up with &lt;code&gt;python manage.py migrate &amp;lt;appname&amp;gt;&lt;/code&gt; as before. Yes, you need to do both.&lt;/p&gt;

&lt;h1&gt;
  
  
  Testing that our connection can get data
&lt;/h1&gt;

&lt;p&gt;Ok! We have our &lt;code&gt;DATABASES&lt;/code&gt; entry, our &lt;code&gt;tnsnames.ora&lt;/code&gt; file and something in &lt;code&gt;models.py&lt;/code&gt;. Let's go to the shell!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python manage.py shell
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We are now in the magical django-infused shell environment. You'll need to tell the shell where to find your model, and then retrieve some data within it. In order to be sure to retrieve something, look for a row that you know is in your data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In[1]: from &amp;lt;appname&amp;gt;.models import *
In[2]: test = Student.objects.get(id=&amp;lt;id that really exists&amp;gt;)
In[3]: test.name
     &amp;lt;you should see the correct related field info here!&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If instead of beautiful data you see &lt;code&gt;NameError: name 'Students' is not defined&lt;/code&gt; then you probably either 1) typoed the class name for your model or 2) did not run the migration commands as above.&lt;/p&gt;

&lt;h1&gt;
  
  
  Go forth and conquer
&lt;/h1&gt;

&lt;p&gt;Celebrate your success! Now that you have proven that your Django project is connected to your Oracle legacy database, go get that data and do whatever you really need to be doing! If you had to write the initial testing model, you will need to go write the rest of the models you need by hand. Not fun, and more error prone, but I haven't found a way around it. I would love to be wrong, though, so let me know if you have a solution!&lt;/p&gt;

&lt;p&gt;If this is your first time pulling data through a Django model, take some time to get comfortable with the &lt;a href="https://docs.djangoproject.com/en/2.2/topics/db/"&gt;Django ORM (object-relational mapping layer)&lt;/a&gt;. It's a fancy way to say "learn how to use that cool model you just generated or wrote." That's where I'm headed next.&lt;/p&gt;

</description>
      <category>python</category>
      <category>django</category>
      <category>oracle</category>
      <category>data</category>
    </item>
  </channel>
</rss>
