<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jisc</title>
    <description>The latest articles on DEV Community by Jisc (@jisc).</description>
    <link>https://dev.to/jisc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F66%2F83869800-fe4f-4d75-b8f3-b8263c75c23b.jpg</url>
      <title>DEV Community: Jisc</title>
      <link>https://dev.to/jisc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jisc"/>
    <language>en</language>
    <item>
      <title>AI can be DIY – a Raspberry Pi powered “seeing eye”</title>
      <dc:creator>Martin Hamilton</dc:creator>
      <pubDate>Sat, 16 Sep 2017 16:06:07 +0000</pubDate>
      <link>https://dev.to/jisc/ai-can-be-diy--a-raspberry-pi-powered-seeing-eye-5b1</link>
      <guid>https://dev.to/jisc/ai-can-be-diy--a-raspberry-pi-powered-seeing-eye-5b1</guid>
      <description>&lt;p&gt;Scarcely a day goes by right now without a breathless newspaper headline about how artificial intelligence (AI) is going turn us all into superhumans, if it doesn’t end up replacing us first. But what do we really mean by AI, and what could we do with it? In this post I’ll take a look at the state of the art, and how you could build your own Do-It-Yourself “seeing eye” AI using a cheap &lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi&lt;/a&gt; computer and some free software from Google called &lt;a href="http://www.tensorflow.org" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to have a go at doing this yourself, I’m following a brilliant &lt;a href="https://planb.nicecupoftea.org/2016/11/11/a-speaking-camera-using-pi3-and-tensorflow/" rel="noopener noreferrer"&gt;step by step guide&lt;/a&gt; produced by Libby Miller from BBC R&amp;amp;D. I should also note that this is possible because of Sam Abrahams, who got &lt;a href="https://github.com/samjabrahams/tensorflow-on-raspberry-pi" rel="noopener noreferrer"&gt;TensorFlow working on the Raspberry Pi&lt;/a&gt;, and also literally &lt;a href="http://www.samabrahams.com/" rel="noopener noreferrer"&gt;wrote the book on TensorFlow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FIMG_20170519_153804-300x225.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FIMG_20170519_153804-300x225.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Raspberry Pi logo screen printed onto motherboard&lt;/p&gt;

&lt;p&gt;AI right now is mainly focussed on pattern recognition – in still and moving images, but also in sounds (recognising words and sentences) and text (identifying, say, that this text is written in English). A good example would be the Raspberry Pi logo in the picture above. Even though it’s a little blurred and we can’t see the whole thing, most people would recognise that the picture included some kind of berry. People who were familiar with the diminutive low cost computer would be able to identify that the picture included the Raspberry Pi logo almost instantly – and the circuit board background might help to jog their memory.&lt;/p&gt;

&lt;p&gt;While we talk about “intelligence”, the truth is that this pattern recognition is pretty dumb. The intelligence, if there is any, is supplied by a human being adding some rules that tell the computer what patterns to look out for and what to do when it matches a pattern. So let’s try a little experiment – we’ll attach a camera and a speaker to our Raspberry Pi, teach it to recognise the objects that the camera sees, and have it tell us what it’s looking at. This is a very slow and clunky low tech version of the &lt;a href="http://www.orcam.com/" rel="noopener noreferrer"&gt;OrCam&lt;/a&gt;, a new invention helping blind and partially sighted people to live independently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FScreen-Shot-2017-09-16-at-16.17.58-1024x478.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FScreen-Shot-2017-09-16-at-16.17.58-1024x478.png" alt="Our Raspberry Pi powered "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Raspberry Pi powered “seeing eye” AI&lt;/p&gt;

&lt;p&gt;The Raspberry Pi uses very little electricity, so you can actually run it off a battery, although it’s not as portable or as sleek as the OrCam! And rather than a speaker, you could simply plug a pair of headphones in – but the speaker makes my demo video work better. I used a Raspberry Pi model 3 (£25) and an official Raspberry Pi camera (£29). If you’re wondering what the wires are for, this is my cheap and cheerful substitute for a shutter release button for the camera, using the Raspberry Pi’s General Purpose Input Output (GPIO) connector. GPIO makes it easy to connect all kinds of hardware and expansion boards to your Raspberry Pi.&lt;/p&gt;
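&lt;p&gt;To give a flavour of how little code the button side takes: on a real Pi you would typically use the gpiozero library and attach a callback to the pin your switch is wired to. Here’s a minimal sketch of that callback style – the pin number, the fake button class and the capture function are my own illustrative assumptions, not taken from the original wiring, and the hardware is faked so the logic can run anywhere:&lt;/p&gt;

```python
# Sketch of the shutter-button logic. On a real Pi you would write
# button = gpiozero.Button(17) (pin 17 is an assumption - use whichever
# GPIO pin your switch is wired to). Here the hardware is faked.

class FakeButton:
    """Stands in for gpiozero.Button when no Pi is attached."""
    def __init__(self):
        self.when_pressed = None  # gpiozero exposes this same attribute
    def press(self):
        # Simulates the electrical press event firing the callback.
        if self.when_pressed:
            self.when_pressed()

captured = []

def take_photo():
    # On the Pi this would trigger the camera (e.g. via the picamera library).
    captured.append("photo")

button = FakeButton()
button.when_pressed = take_photo  # register the callback, gpiozero-style
button.press()                    # simulate pushing the physical button
print(len(captured))              # prints 1
```

&lt;p&gt;Swapping FakeButton for the real gpiozero Button is the only change needed on actual hardware – the callback wiring stays identical.&lt;/p&gt;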

&lt;p&gt;So that’s the hardware – what about the software? That’s the real brains of our AI…&lt;/p&gt;

&lt;p&gt;Google’s TensorFlow is an open source machine learning system. Machine learning is the technology that underpins most modern AI systems, and it’s responsible for the pattern recognition I was talking about just now. Google took the bold step of not just making TensorFlow freely available, but also giving everyone access to the source code of the software by making it “open source”. This means that developers all over the world can (and do) enhance it and share their changes.&lt;/p&gt;

&lt;p&gt;The catch with machine learning is that you need to feed your AI lots of example data before it’s able to successfully carry out that pattern recognition I was talking about. Imagine that you are working with a self-driving car – before it can be reasonably sure what a cat running out in front of the car looks like, the AI will need training. You would typically do this by showing it lots of pictures of cats running out in front of cars, maybe gathered during your human-driver-assisted test runs. You’d also show it lots of pictures of other things that it might encounter which aren’t cats, and tell it which pictures are the ones with cats in. Under the hood, TensorFlow builds a “neural network”, a crude simulation of the way that our own brains work.&lt;/p&gt;
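&lt;p&gt;The core idea – show the network labelled examples and nudge its internal weights whenever it gets one wrong – can be seen in miniature without TensorFlow at all. This toy sketch trains a single artificial neuron (a perceptron) in plain Python; the two-number “features” and the labels are entirely made up for illustration:&lt;/p&gt;

```python
# A toy illustration of learning from labelled examples - not TensorFlow,
# just one artificial neuron. Label 1 means "cat", 0 means "not a cat".
# The features (think: fur-ness, boxiness) are invented for this example.

def train(examples, epochs=20, lr=0.1):
    """Repeatedly nudge weights towards correct answers on the examples."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # zero when we got it right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

examples = [([0.9, 0.2], 1), ([0.8, 0.3], 1),   # cats
            ([0.1, 0.9], 0), ([0.2, 0.8], 0)]   # not cats
weights, bias = train(examples)
print(predict(weights, bias, [0.85, 0.25]))  # a cat-like example: prints 1
```

&lt;p&gt;A real neural network stacks thousands of these neurons in layers and uses a subtler update rule, but the training loop – predict, compare with the label, adjust – is the same shape.&lt;/p&gt;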

&lt;p&gt;So let’s give our Raspberry Pi and TensorFlow powered AI a spin – watch my video below:&lt;/p&gt;

&lt;p&gt;Now for a confession – I didn’t actually sit down for hours teaching it myself. Instead I used a ready-made TensorFlow model that Google have trained using &lt;a href="http://www.image-net.org/" rel="noopener noreferrer"&gt;ImageNet&lt;/a&gt;, a free database of over 14 million images. It would take a long time to build this model on the Raspberry Pi itself, because it isn’t very powerful. If you wanted to create a complex model of your own and don’t have access to a supercomputer, you can rent computers from the likes of Google, Microsoft and Amazon to do the work for you.&lt;/p&gt;
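&lt;p&gt;Once the pretrained model has scored a photo, all the “seeing eye” does is read out the handful of highest-scoring labels. That final step is just a sort – here’s a sketch in plain Python, where the label names and confidence scores are invented for illustration rather than real model output:&lt;/p&gt;

```python
# The model's output is one confidence score per label; the "seeing eye"
# speaks the top few. Scores and labels below are invented examples.

def top_labels(scores, labels, k=5):
    """Return the k highest-scoring (label, score) pairs."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

labels = ["raspberry", "strawberry", "circuit board", "teapot", "banana"]
scores = [0.61, 0.22, 0.09, 0.04, 0.01]
for label, score in top_labels(scores, labels, k=3):
    print(label, score)
# raspberry 0.61
# strawberry 0.22
# circuit board 0.09
```

&lt;p&gt;On the Pi, each of those winning labels would then be handed to a text-to-speech tool to be read aloud.&lt;/p&gt;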

&lt;p&gt;So now you’ve seen my “seeing eye” AI, what would you use TensorFlow for? Why not leave a comment and let me know…&lt;/p&gt;


</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>Apple ARKit – mainstream Augmented Reality</title>
      <dc:creator>Martin Hamilton</dc:creator>
      <pubDate>Mon, 11 Sep 2017 10:00:52 +0000</pubDate>
      <link>https://dev.to/jisc/apple-arkit--mainstream-augmented-reality-ddk</link>
      <guid>https://dev.to/jisc/apple-arkit--mainstream-augmented-reality-ddk</guid>
      <description>&lt;p&gt;Ever wandered through a portal into another dimension, or wondered what it would look like if you could get inside a CAD model or an anatomy simulation? This is the promise of Apple’s new &lt;a href="https://developer.apple.com/arkit/" rel="noopener noreferrer"&gt;ARKit&lt;/a&gt; technology for Augmented Reality, part of iOS11, the latest version of the operating system software that drives hundreds of millions of iPads and iPhones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FIMG_0354-300x225.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Finnovativetechnology.jiscinvolve.org%2Fwp%2Ffiles%2F2017%2F09%2FIMG_0354-300x225.png" alt="Turning the IET into a Mario level using Apple ARKit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Turning the IET into a Mario level using Apple ARKit&lt;/p&gt;

&lt;p&gt;Augmented Reality has been around for years, but in quite a limited way – point your phone/tablet camera at a picture that has special markers on it, and the AR app will typically do something like activate a video or show you a 3D model.&lt;/p&gt;

&lt;p&gt;But anyone wanting to develop an AR app in the past has had to contend with a couple of big problems – firstly the hardware in phones and tablets hasn’t quite been up to the job of real time image processing and position tracking, and secondly there hasn’t been a standard way of adding AR capability to an app.&lt;/p&gt;

&lt;p&gt;With recent improvements in processor technology and more powerful graphics and AI co-processors being shipped on our devices, the technology is now at a level where real time position tracking is feasible. Apple are rumoured to be including a sensor similar to Google’s &lt;a href="https://get.google.com/tango/" rel="noopener noreferrer"&gt;Project Tango&lt;/a&gt; device on the upcoming iPhone 8, which will support real time depth sensing and occlusion. This means that your device will be able to tell where objects in the virtual world are in relation to objects in the real world – e.g. is there a person standing in front of a virtual object?&lt;/p&gt;

&lt;p&gt;Apple and Google are also addressing the standardisation issue by adding AR capabilities to their standard development frameworks – through ARKit on Apple devices and the upcoming &lt;a href="https://developers.google.com/ar/" rel="noopener noreferrer"&gt;ARCore&lt;/a&gt; on Android devices. Apple have something of a lead here, having given developers access to ARKit as part of a preview of iOS11. This means that there are literally hundreds of developers who already know how to create ARKit apps. We can expect that there will be lots of exciting new AR apps appearing in the App Store shortly after iOS11 formally launches – most likely as part of the iPhone 8 launch announcement. If you’re a developer, you can find lots of &lt;a href="https://github.com/olucurious/Awesome-ARKit" rel="noopener noreferrer"&gt;demo / prototype ARKit apps on GitHub&lt;/a&gt;. [[ edit: this post was written before the iPhone 8 / X launch! ]]&lt;/p&gt;

&lt;p&gt;As part of the Jisc Digi Lab at this year’s &lt;a href="http://www.theworldsummitseries.com/events/the-world-academic-summit-2017/" rel="noopener noreferrer"&gt;Times Higher Education World Academic Summit&lt;/a&gt; I made a video that shows a couple of the demo apps that people have made, and gives you a little bit of an idea of how it will be used:&lt;/p&gt;

&lt;p&gt;How can we see people using ARKit in research and education? Well, just imagine holding your phone up to find that the equipment around you in the STEM lab is all tagged with names, documentation, “reserve me” buttons and the like – maybe with a graphical status showing whether you have had the health and safety induction needed to use the kit. Or imagine a prospective student visit where the would-be students can hold their phones up to see what happens in each building, and giant arrows appear directing them to the next activity, induction session, students’ union social and so on.&lt;/p&gt;

&lt;p&gt;It’s easy to picture AR becoming widely used in navigation apps like Apple Maps and Google Maps – and for the technology to leap from screens we hold up in front of us to screens that we wear (glasses!). Here’s a video from Keiichi Matsuda that imagines just what the future might look like when Augmented Reality glasses have become the norm:&lt;/p&gt;

&lt;p&gt;How will you use ARKit in research and education? Perhaps you already have plans? Leave a comment below to share your ideas.&lt;/p&gt;

</description>
      <category>arkit</category>
      <category>augmentedreality</category>
    </item>
  </channel>
</rss>
