<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Juyae </title>
    <description>The latest articles on DEV Community by Juyae  (@juyae).</description>
    <link>https://dev.to/juyae</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F643291%2F7212a95e-2c8a-4adb-85d2-17c6e71186d3.jpeg</url>
      <title>DEV Community: Juyae </title>
      <link>https://dev.to/juyae</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/juyae"/>
    <language>en</language>
    <item>
      <title>Kotlin Programming Language </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Mon, 24 Jan 2022 02:23:30 +0000</pubDate>
      <link>https://dev.to/juyae/kotlin-programming-language-kej</link>
      <guid>https://dev.to/juyae/kotlin-programming-language-kej</guid>
      <description>&lt;h2&gt;
  
  
  Why Kotlin?
&lt;/h2&gt;

&lt;p&gt;What is Kotlin, and why do we need it? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prompted by feedback I received recently, I studied the Kotlin language to shore up my weak points. I have been developing with Kotlin all along, but my study of the language itself was lacking, so here I summarize what I learned based on a document written by a Kotlin compiler developer. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At Google I/O 2019, Google announced that Android development would increasingly become Kotlin-first. Kotlin is a concise, pragmatic, statically typed language that supports type inference. The advantages of static typing are as follows. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Method calls are faster, because there is no need to figure out at runtime which method to call. &lt;/li&gt;
&lt;li&gt;Programs are less likely to crash at runtime, because the compiler verifies the program's correctness. &lt;/li&gt;
&lt;li&gt;Unfamiliar code is easier to work with, because you can tell which type every object in the code belongs to. &lt;/li&gt;
&lt;li&gt;Refactoring is safer, tooling can provide more accurate code completion, and IDEs can build better support features. &lt;/li&gt;
&lt;/ol&gt;


&lt;h4&gt; Summary &lt;br&gt;
&lt;/h4&gt;

&lt;p&gt;✔️ Kotlin is a statically typed language that supports type inference. It keeps source code concise while guaranteeing its correctness and performance. &lt;/p&gt;

&lt;p&gt;✔️ Kotlin's runtime library is small, and the Kotlin compiler fully supports the Android APIs, the major IDEs, and build systems.  &lt;/p&gt;

&lt;p&gt;✔️ Kotlin types carry @Nullable / @NonNullable semantics, which helps prevent NPEs. &lt;/p&gt;

&lt;p&gt;✔️ Seamless interoperability with Java: because Kotlin code compiles to JVM bytecode, Kotlin code can be called directly from Java code and vice versa. In other words, existing Java libraries can be used directly from Kotlin.  &lt;/p&gt;
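
&lt;p&gt;As a quick, made-up illustration of the type-inference and null-safety points above (not from the original post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fun main() {
    val language = "Kotlin"        // type String is inferred; static typing without the ceremony
    val length: Int = language.length

    var nickname: String? = null   // a nullable type must be declared explicitly
    // println(nickname.length)    // does not compile: the possible NPE is caught by the compiler
    println(nickname?.length)      // safe call prints "null" instead of crashing
    println("$language has $length letters")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;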

&lt;h2&gt;
  
  
  Data Class
&lt;/h2&gt;

&lt;p&gt;A data class automatically generates useful convenience methods. The functionality data classes provide out of the box cuts down on boilerplate code, keeping things concise and making it much easier for developers to work with classes that hold data. The following methods are generated together with a data class. &lt;/p&gt;

&lt;p&gt;✔️hashCode() : converts the object's identity to an int value so you can check whether two objects are the same object&lt;br&gt;
✔️copy() : very useful for immutable definitions; makes it easy to copy an object while changing only specific fields&lt;br&gt;
✔️equals() : returns a Boolean indicating whether two objects hold the same values (contents) &lt;br&gt;
✔️toString() : prints the property values automatically&lt;br&gt;
✔️componentN() : numbers each property so the object can be destructured &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Characteristics&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A data class cannot be subclassed. &lt;/li&gt;
&lt;li&gt;Properties must be declared with val or var (immutable val is better). &lt;/li&gt;
&lt;li&gt;It cannot be marked abstract, open, sealed, or inner. &lt;/li&gt;
&lt;/ol&gt;
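
&lt;p&gt;A minimal sketch of the generated methods in action, using a hypothetical User class (not from the original post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data class User(val name: String, val age: Int)

fun main() {
    val a = User("Juyae", 25)
    val b = a.copy(age = 26)          // copy(): change only the fields you need

    println(a)                        // toString(): User(name=Juyae, age=25)
    println(a == User("Juyae", 25))   // equals(): true (structural equality)
    println(a.hashCode())             // hashCode(): consistent with equals()

    val (name, age) = b               // componentN(): destructuring declaration
    println("$name is $age")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;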

&lt;h2&gt;
  
  
  Equals and hashCode
&lt;/h2&gt;

&lt;p&gt;Because data classes define equals/hashCode for you at the Kotlin language level, it is best to make active use of them, but let's take this opportunity to understand what each method does and how they differ. &lt;br&gt;
A few rules can be summarized as follows. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If you define equals in a class, you must also override hashCode. &lt;/li&gt;
&lt;li&gt;If two objects are equal according to equals, their hashCode values must also be equal. &lt;/li&gt;
&lt;li&gt;If these conditions are not met, problems can occur when using collections such as HashMap and HashSet. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;So why do equals and hashCode have to be overridden together?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you override equals without also overriding hashCode, the code behaves differently from what you expect. More precisely, problems arise when using hash-based collections (HashSet, HashMap, Hashtable). Two objects are judged logically equal only when their hashCode return values match first and their equals method then returns true. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BalddOtm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150741671-98464692-0630-4859-b29c-6a6dfd1355a3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BalddOtm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150741671-98464692-0630-4859-b29c-6a6dfd1355a3.png" width="880" height="315"&gt;&lt;/a&gt;&lt;/p&gt;
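
&lt;p&gt;A small sketch of the problem, with hypothetical classes (not from the original post): Point overrides equals but not hashCode, while the data class P gets both for free.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class Point(val x: Int, val y: Int) {
    // equals is overridden, hashCode is not
    override fun equals(other: Any?): Boolean =
        other is Point &amp;amp;&amp;amp; other.x == x &amp;amp;&amp;amp; other.y == y
}

data class P(val x: Int, val y: Int)

fun main() {
    val set = hashSetOf(Point(1, 2))
    // equals says the points are equal, but their hash codes differ,
    // so the lookup checks the wrong bucket and misses
    println(Point(1, 2) in set)               // false (unexpected)
    println(P(1, 2) in hashSetOf(P(1, 2)))    // true: equals and hashCode agree
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;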

&lt;h2&gt;
  
  
  Sealed Class
&lt;/h2&gt;

&lt;p&gt;A sealed class is itself an abstract class and can have several subclasses that inherit from it. Unlike an enum class it supports inheritance, so you can use inheritance to implement richer behavior. A sealed class restricts which subclasses may extend it. For subclasses whose state never changes, using an object is recommended. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sealed class has only private constructors. &lt;/li&gt;
&lt;li&gt;It is abstract by default, and the subclasses of a sealed class must be declared in the same file. &lt;/li&gt;
&lt;li&gt;Its subclasses can hold state, there can be many instances, and each can define its own constructor as needed. In other words, instead of a single static instance you can represent a variety of states. &lt;/li&gt;
&lt;li&gt;Sealed classes are very effective with when expressions and can express a restricted class hierarchy; you can think of them as an extended version of an enum class. &lt;/li&gt;
&lt;/ul&gt;
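
&lt;p&gt;A minimal sketch with a hypothetical UiState hierarchy (not from the original post), showing a stateless subclass as an object and an exhaustive when expression:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sealed class UiState {
    object Loading : UiState()                                  // no state: a single object is enough
    data class Success(val items: List&amp;lt;String&amp;gt;) : UiState()
    data class Error(val message: String) : UiState()
}

fun render(state: UiState): String = when (state) {            // exhaustive: no else branch needed
    is UiState.Loading -&amp;gt; "Loading..."
    is UiState.Success -&amp;gt; "Loaded ${state.items.size} items"
    is UiState.Error -&amp;gt; "Failed: ${state.message}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;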

&lt;h2&gt;
  
  
  Kotlin Scope Function
&lt;/h2&gt;

&lt;p&gt;Scope functions let you execute a block of code within the context of a specific object. When you call a scope function on an object with a lambda expression, it forms a temporary scope; inside that scope you can access the object without using its name, which makes the code concise. There are five scope functions. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wMu3a1tm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150745927-6bfdbe69-ba21-4721-ba89-a458c72b211b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wMu3a1tm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150745927-6bfdbe69-ba21-4721-ba89-a458c72b211b.png" width="880" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;let : used when the object's value needs to be definite (commonly for null checks) &lt;/li&gt;
&lt;li&gt;run : used to access the object's values more easily &lt;/li&gt;
&lt;li&gt;with : when no result is needed (for an object that is created and initialized at once and cannot be null) &lt;/li&gt;
&lt;li&gt;apply :  initializes the object at creation and returns the object itself &lt;/li&gt;
&lt;li&gt;also : when you need the object itself but want easier initialization (used when you do not use the receiver or do not change its properties) &lt;/li&gt;
&lt;/ul&gt;
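
&lt;p&gt;A short sketch of the five functions with a hypothetical Person class (not from the original post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data class Person(var name: String = "", var age: Int = 0)

fun main() {
    // apply: configure the object at creation time and return the object itself
    val person = Person().apply {
        name = "Juyae"
        age = 25
    }

    // let: run the block only when the value is non-null, without repeating the name
    val nickname: String? = "juyae"
    nickname?.let { println("nickname length = ${it.length}") }

    // also: the receiver is returned unchanged, handy for side effects such as logging
    val logged = person.also { println("created $it") }

    // with / run: access members without the object name; the lambda result is returned
    val summary = with(person) { "$name ($age)" }
    println(summary)
    println(logged)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;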

</description>
      <category>kotlin</category>
      <category>programming</category>
      <category>android</category>
    </item>
    <item>
      <title>Android Clean Architecture</title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Sat, 22 Jan 2022 08:24:08 +0000</pubDate>
      <link>https://dev.to/juyae/android-clean-architecture-1f76</link>
      <guid>https://dev.to/juyae/android-clean-architecture-1f76</guid>
      <description>&lt;h2&gt;
  
  
  Clean Architecture
&lt;/h2&gt;

&lt;p&gt;The concept of Clean Architecture came into the world in 2012, when Robert C. Martin (Uncle Bob) wrote about it on his blog. The goal of Clean Architecture is to &lt;strong&gt;separate concerns by separating layers&lt;/strong&gt;. Applications delivered to users contain a huge number of features, so they are highly complex and must be structured with maintainability in mind. Clean Architecture consists of four layers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QErt2iDw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150629097-2dcde30c-b598-4495-b624-47daf164d535.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QErt2iDw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150629097-2dcde30c-b598-4495-b624-47daf164d535.png" width="880" height="633"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h5&gt;The dependency rule must point from the outside inward, from low-level policies toward high-level policies. In the diagram above, dependencies decrease as you move inward.&lt;br&gt;&lt;br&gt;
&lt;/h5&gt;


&lt;h3&gt;Shall we look at the role of each layer?&lt;br&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OMmmeZr---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150631074-4a697ecc-a752-4591-8b6d-df32ca3477c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OMmmeZr---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150631074-4a697ecc-a752-4591-8b6d-df32ca3477c0.png" width="880" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Entities&lt;/strong&gt; - Entities encapsulate the most general, high-level rules. In other words, entities encapsulate business rules. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use cases&lt;/strong&gt; - Use cases express what a user of the service wants to accomplish through it ("Screaming Architecture"); they should be clear enough that you can tell what the software is just by looking at them. You could call them the smallest unit of a request. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interface Adapters &amp;amp; Presenters&lt;/strong&gt; - The interface adapters sit between the business logic and the outer layers: they convert data from the format of the Domain's Entities and Use Cases into a format that can be applied to the database. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frameworks &amp;amp; Drivers&lt;/strong&gt; - The outermost layer, made up of frameworks, the database, the UI, HTTP clients, and so on. &lt;/li&gt;
&lt;/ol&gt;


&lt;h3&gt;What are the advantages of separating concerns like this? &lt;br&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The project is easier to maintain. &lt;/li&gt;
&lt;li&gt;It is easier to write test code. &lt;/li&gt;
&lt;li&gt;New features can be added and modified quickly. &lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Android Clean Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z5--NDkr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150628995-89e2a165-bcdf-4af7-b091-e5f3bedfd5cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z5--NDkr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/150628995-89e2a165-bcdf-4af7-b091-e5f3bedfd5cb.png" width="880" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When Clean Architecture is applied to Android, it is divided into three layers, Presentation, Data, and Domain, as shown in the picture above. Let's look at what each of these layers does. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.  Presentation&lt;/strong&gt;&lt;br&gt;
Handles everything related to the UI. It contains Activities, Fragments, ViewModels, and Presenters. The presentation layer sits on the outermost side and depends on the Domain layer. &lt;br&gt;
&lt;strong&gt;2. Data&lt;/strong&gt; &lt;br&gt;
Contains the Repository implementations, data sources, and data from server communication. It also uses mapper classes to convert data-layer models into Domain-layer models. The Data layer depends on the Domain layer. &lt;br&gt;
&lt;strong&gt;3. Domain&lt;/strong&gt;&lt;br&gt;
Contains the Entities and Use Cases needed for the application's business logic (plus the Repository interfaces). It has no dependency on the Presentation or Data layers and stands independently. &lt;/p&gt;
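
&lt;p&gt;As a rough sketch of how the three layers can look in Kotlin (hypothetical class names, not from the original post):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Domain layer: entity, repository interface, and use case; no Android dependencies
data class Article(val id: Long, val title: String)

interface ArticleRepository {
    suspend fun getArticle(id: Long): Article
}

class GetArticleUseCase(private val repository: ArticleRepository) {
    suspend operator fun invoke(id: Long): Article = repository.getArticle(id)
}

// Data layer: a data-source model plus a repository implementation that maps it
// to the Domain model, so the Domain never sees the DTO
class ArticleDto(val id: Long, val title: String)

interface ArticleRemoteDataSource {
    suspend fun fetch(id: Long): ArticleDto
}

class ArticleRepositoryImpl(
    private val remote: ArticleRemoteDataSource
) : ArticleRepository {
    override suspend fun getArticle(id: Long): Article {
        val dto = remote.fetch(id)
        return Article(dto.id, dto.title)   // mapper: data model to domain model
    }
}

// Presentation layer: a ViewModel-style class that only depends on the use case
class ArticleViewModel(private val getArticle: GetArticleUseCase) {
    suspend fun load(id: Long): Article = getArticle(id)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;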

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Working on projects that adopted Clean Architecture, I realized I had been using it without really understanding the concepts, and it reminded me how important it is to know exactly what each layer does. The benefits grow as the project gets bigger, and if concerns are separated cleanly, for example with multiple modules, I think I will be able to write even better code. I will write about a project that applies this architecture in more detail in the next post! &lt;/p&gt;

</description>
      <category>android</category>
      <category>architecture</category>
      <category>docs</category>
    </item>
    <item>
      <title>TF-Agents Tutorial </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Sun, 22 Aug 2021 06:01:58 +0000</pubDate>
      <link>https://dev.to/juyae/tf-agents-tutorial-2jkp</link>
      <guid>https://dev.to/juyae/tf-agents-tutorial-2jkp</guid>
      <description>&lt;h2&gt;
  
  
  Reinforcement Learning with TF-Agents
&lt;/h2&gt;

&lt;p&gt;Reinforcement learning (RL) is a general framework where agents learn to perform actions in an environment so as to maximize a reward. The two main components are the environment, which represents the problems to be solved, and the agent, which represents the learning algorithm. &lt;/p&gt;

&lt;p&gt;The agent and environment continuously interact with each other. At each time step, the agent takes an action a_t on the environment based on its policy π(a_t | s_t), where s_t is the current observation from the environment, and receives a reward r_{t+1} and the next observation s_{t+1} from the environment. The goal is to improve the policy so as to maximize the sum of rewards. &lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C9MN3n_m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130343598-af1899db-c0da-42eb-a97c-05f9279d9c02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C9MN3n_m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130343598-af1899db-c0da-42eb-a97c-05f9279d9c02.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TF-Agents makes designing, implementing and testing new RL algorithms easier, by providing well tested modular components that can be modified and extended. It enables fast code iteration, with good test integration and benchmarking. &lt;/p&gt;

&lt;h2&gt;
  
  
  Cartpole Environment
&lt;/h2&gt;

&lt;p&gt;The Cartpole environment is one of the most well-known classic RL problems. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  The observation from the environment s_t is a 4D vector representing the position and velocity of the cart, and the angle and angular velocity of the pole.&lt;/li&gt;
&lt;li&gt;  The agent can control the system by taking one of 2 actions a_t: push the cart right (+1) or left (-1).&lt;/li&gt;
&lt;li&gt;  A reward r_{t+1} = 1 is provided for every timestep that the pole remains upright. The episode ends when one of the following is true:

&lt;ul&gt;
&lt;li&gt;  the pole tips over some angle limit&lt;/li&gt;
&lt;li&gt;  the cart moves outside of the world edges&lt;/li&gt;
&lt;li&gt;  200 time steps pass. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DQN Agent
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf"&gt;DQN (Deep Q-Network) algorithm&lt;/a&gt; was developed by DeepMind in 2015. It was able to solve a wide range of Atari games (some to superhuman level) by combining reinforcement learning and deep neural networks at scale. The algorithm was developed by enhancing a classic RL algorithm called Q-Learning with deep neural networks and a technique called &lt;em&gt;experience replay&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dependencies
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uVMRB3sD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344057-eefe0df3-e868-40f6-a1a5-9e34b5e6d66c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uVMRB3sD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344057-eefe0df3-e868-40f6-a1a5-9e34b5e6d66c.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hyperparameters
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--77lgH8EN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344088-5e9d664b-1ce0-4a23-b94c-9dc50d2ee224.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--77lgH8EN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344088-5e9d664b-1ce0-4a23-b94c-9dc50d2ee224.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Cartpole environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;observation&lt;/code&gt;  is an array of 4 floats:

&lt;ul&gt;
&lt;li&gt;  the position and velocity of the cart&lt;/li&gt;
&lt;li&gt;  the angular position and velocity of the pole&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;reward&lt;/code&gt;  is a scalar float value&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;action&lt;/code&gt;  is a scalar integer with only two possible values:

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;0&lt;/code&gt;  — "move left"&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;1&lt;/code&gt;  — "move right"&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Training the Agent
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;It will take ~7 minutes to run&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xQ2jI9OI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344210-80e32fce-16c2-4956-a30f-b8d5522137a7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xQ2jI9OI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/130344210-80e32fce-16c2-4956-a30f-b8d5522137a7.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://badge.fury.io/py/tensorflow"&gt;&lt;img src="https://camo.githubusercontent.com/a7b5b417de938c1faf3602c7f48f26fde8761a977be85390fd6c0d191e210ba8/68747470733a2f2f696d672e736869656c64732e696f2f707970692f707976657273696f6e732f74656e736f72666c6f772e7376673f7374796c653d706c6173746963" alt="Python"&gt;&lt;/a&gt;&lt;a href="https://www.tensorflow.org/api_docs/"&gt;&lt;img src="https://camo.githubusercontent.com/5fee71a94d467d0fa33c4469ad6e6ef356042a8ca784a0c0eae6a04796b77d38/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f6170692d7265666572656e63652d626c75652e737667" alt="Documentation"&gt;&lt;/a&gt; &lt;a href="https://badge.fury.io/py/tf-agents"&gt;&lt;img src="https://camo.githubusercontent.com/f1c38aeed864cac806134ef8fe62a050a4e4a3c116f6a2fb8f09b6b859eed4ed/68747470733a2f2f62616467652e667572792e696f2f70792f74662d6167656e74732e737667" alt="PyPI tf-agents"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devjournal</category>
      <category>deeplearning</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>AOS Hackathon </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Sun, 25 Jul 2021 09:22:05 +0000</pubDate>
      <link>https://dev.to/juyae/aos-hackathon-3b4l</link>
      <guid>https://dev.to/juyae/aos-hackathon-3b4l</guid>
      <description>&lt;h2&gt;
  
  
  Participated in Hackathon for Android Developer
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;2021.06.27 ~ 2021.07.17 &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Journey, happy mate who will wake up your lost daily life
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Journey reminds us of things that are important in our daily lives but easily forgotten. Journey's unique concept helps users develop a habit of finding happiness by making it easy to approach in everyday life. Users can receive daily greetings via push notifications and complete random challenges. You can also keep your own diary and share your small daily events with other users in Journey's community feed. &lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=3Yg3qSHMRtY"&gt;Journey's Preview&lt;/a&gt; &lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Open Source Library
&lt;/h2&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Library&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://developer.android.com/kotlin/ktx/extensions-list"&gt;Activity-KTX&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Activity ViewModel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://developer.android.com/kotlin/ktx/extensions-list"&gt;Fragment-KTX&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fragment Shared ViewModel&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://developer.android.com/jetpack/androidx/releases/navigation"&gt;Jetpack Navigation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fragment Transition&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/ausichenko/android-lifecycles"&gt;LifeCycle&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Fragment Lifecycle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/ravi8x/LiveData"&gt;LiveData&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;LifeCycleOwner&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/square/retrofit"&gt;Retrofit2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Retrofit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/google/gson"&gt;Gson&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Json to Gson&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://square.github.io/okhttp/"&gt;OkHttp&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Retrofit2 Token Interceptor &amp;amp;  Util&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Google FCM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://developer.android.com/jetpack/androidx/releases/hilt"&gt;Hilt&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;DI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/bumptech/glide"&gt;Glide&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;URL Image&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Main Service
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Push Notification&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  Process
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;FCM ,  Firebase Push Notification&lt;/strong&gt; &lt;br&gt;
 &lt;strong&gt;Tools-&amp;gt; Firebase -&amp;gt; Cloud messaging&lt;/strong&gt; &lt;br&gt;
     (1) Connect your app to firebase &lt;br&gt;
     (2) Add FCM to your app &lt;br&gt;
     (3) Firebase -&amp;gt; My Console -&amp;gt; Cloud Messaging -&amp;gt; Add Your Application -&amp;gt; Connect &lt;br&gt;
    &lt;strong&gt;FirebaseMessagingService class File&lt;/strong&gt; &lt;br&gt;
    &lt;strong&gt;Get your application Token&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
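
&lt;p&gt;A minimal sketch of the FirebaseMessagingService class mentioned in the steps above (hypothetical class name; assumes the firebase-messaging dependency and manifest entry are already set up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

class JourneyMessagingService : FirebaseMessagingService() {

    // Called when FCM issues or refreshes the device registration token
    override fun onNewToken(token: String) {
        // send the token to the app server so it can target this device
    }

    // Called when a push message arrives while the app is in the foreground
    override fun onMessageReceived(message: RemoteMessage) {
        val title = message.notification?.title
        val body = message.notification?.body
        // build and show a notification using the received title and body
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;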

&lt;p&gt;&lt;strong&gt;2. Random Challenge &amp;amp; Course&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  Process
&lt;/h3&gt;

&lt;p&gt;Users can select a one-day, two-day, three-day, etc. challenge. Depending on the date, Journey provides users with different stamp views.&lt;br&gt;&lt;br&gt;
Users then open the challenge view and check their mission. &lt;br&gt;
When they complete the random challenge, we show them a congratulatory dialog message. &lt;br&gt;
If they do not complete the challenge, we show them an encouraging dialog message instead.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;3. Community Feed &amp;amp; Diary&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;
  
  
  Process
&lt;/h3&gt;

&lt;p&gt;Users can browse other users' feeds, click their posts, and give them likes. The feed can also be sorted by latest or by most liked. &lt;br&gt;
In the diary, users can privately record small events from their daily lives. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rir6z0lz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125825984-5d6087d6-e8bd-4b4b-8ad8-004736141a6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rir6z0lz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125825984-5d6087d6-e8bd-4b4b-8ad8-004736141a6d.png"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2_TiLhpI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125818953-985f2d8b-442d-41e6-833c-c82aaa95f672.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2_TiLhpI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125818953-985f2d8b-442d-41e6-833c-c82aaa95f672.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--61R9yXNC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/126058907-1f14f778-5784-432e-bfa4-1a7c05110391.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--61R9yXNC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/126058907-1f14f778-5784-432e-bfa4-1a7c05110391.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zKLqfnUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/126058934-63c29b0c-9521-4465-ba1d-615f9e273e7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zKLqfnUQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/126058934-63c29b0c-9521-4465-ba1d-615f9e273e7d.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tKJd9MwT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819660-29a88675-1b4d-4a72-b358-5917d71b4f6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tKJd9MwT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819660-29a88675-1b4d-4a72-b358-5917d71b4f6b.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--miMVZbQE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125906281-1427c872-10af-4cad-b791-c9f6722ee39d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--miMVZbQE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125906281-1427c872-10af-4cad-b791-c9f6722ee39d.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--30YUDacN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819715-3f5a355f-5ee7-4465-999d-455550becd82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--30YUDacN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819715-3f5a355f-5ee7-4465-999d-455550becd82.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8-kcAu5B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125941262-a64007c8-62d6-4f45-8188-726ad4ace3e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8-kcAu5B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125941262-a64007c8-62d6-4f45-8188-726ad4ace3e0.png"&gt;&lt;/a&gt;&lt;br&gt;
    &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p5_GTHM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125941379-a457ebc3-fbc8-4d84-a4c7-141d117d1ec7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p5_GTHM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/125941379-a457ebc3-fbc8-4d84-a4c7-141d117d1ec7.png"&gt;&lt;/a&gt;&lt;br&gt;
  &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bj1OpBsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819842-019d3d42-0af6-4775-8b3e-276752416deb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bj1OpBsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/28949235/125819842-019d3d42-0af6-4775-8b3e-276752416deb.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Power of Collaboration
&lt;/h3&gt;

&lt;p&gt;“&lt;em&gt;A hackathon is an event organized by talented people for talented people. You don’t have to have supernatural skills or extraordinary knowledge in order to participate. You just have to be willing to try something new. During this event you’ll discover how you can apply your skills in development, management, design, art, idea generation, and solution election.&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;Everyone has heard about hackathons; this is especially true at a university full of engineering students. However, not many of us have actually experienced one. Before I joined the Journey team, I had the common misconception that a hackathon only involves coding for an entire day. In fact, during those 3 weeks every day was a gift, a challenge, and a reward. Not all hackathons are about coding. Every day I learned how to cooperate with my Android team and with the design, iOS, and server teams. Together we built precious days, tremendous growth, and relationships. Of course we worked long hours without sleep, but that was never a problem. I made priceless memories with my Journey team. Loveya 🖤 &lt;/p&gt;

</description>
      <category>android</category>
      <category>devjournal</category>
      <category>kotlin</category>
      <category>github</category>
    </item>
    <item>
      <title>Basics of Computer Graphics </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Fri, 25 Jun 2021 03:39:04 +0000</pubDate>
      <link>https://dev.to/juyae/basics-of-computer-graphics-4gaa</link>
      <guid>https://dev.to/juyae/basics-of-computer-graphics-4gaa</guid>
      <description>&lt;h2&gt;
  
  
  3D Computer Graphics
&lt;/h2&gt;

&lt;p&gt;Computer graphics is an art of drawing pictures on computer screens with the help of programming. It involves computations, creation, and manipulation of data. In other words, we can say that computer graphics is a rendering tool for the generation and manipulation of images.&lt;/p&gt;

&lt;p&gt;In the 2D system, we use only two coordinates X and Y but in 3D, an extra coordinate Z is added. 3D graphics techniques and their application are fundamental to the entertainment, games, and computer-aided design industries. It is a continuing area of research in scientific visualization.&lt;/p&gt;

&lt;p&gt;Furthermore, 3D graphics components are now a part of almost every personal computer and, although traditionally intended for graphics-intensive software such as games, they are increasingly being used by other applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Projection &lt;/li&gt;
&lt;li&gt;Translation &amp;amp; Rotation &lt;/li&gt;
&lt;li&gt;Polygon Meshes&lt;/li&gt;
&lt;li&gt;Morphing&lt;/li&gt;
&lt;li&gt;Projects &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Projection
&lt;/h2&gt;

&lt;p&gt;Projection, also called the &lt;strong&gt;Viewing Transformation&lt;/strong&gt;, is the operation of mapping a 3D object onto a 2D screen. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View Plane, Projection Plane : the plane onto which the object is projected&lt;/li&gt;
&lt;li&gt;View Point = Eye Position = Camera Position = COP : the observer's position &lt;/li&gt;
&lt;li&gt;Projectors : projection lines directed toward every part of the object &lt;/li&gt;
&lt;li&gt;Line of Sight : the line of sight toward the WCS origin or the focal point &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Parallel Projection
&lt;/h2&gt;

&lt;p&gt;Parallel projection - the viewpoint is assumed to be infinitely far from the object &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The projectors are parallel &lt;/li&gt;
&lt;li&gt;Parallel lines on the object remain parallel after projection &lt;/li&gt;
&lt;li&gt;Objects of the same length are projected at the same length regardless of their distance from the viewpoint &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Parallel projection discards z-coordinate and parallel lines from each vertex on the object are extended until they intersect the view plane. In parallel projection, we specify a direction of projection instead of center of projection. In parallel projection, the distance from the center of projection to project plane is infinite. In this type of projection, we connect the projected vertices by line segments which correspond to connections on the original object.&lt;/p&gt;

&lt;h2&gt;
  
  
  Perspective Projection
&lt;/h2&gt;

&lt;p&gt;Perspective projection - the viewpoint is assumed to be a finite distance from the object &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The projectors spread out radially&lt;/li&gt;
&lt;li&gt;This is how a camera or the human eye sees an object &lt;/li&gt;
&lt;li&gt;&lt;p&gt;Objects of the same size look smaller when far from the viewpoint and larger when close to it &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vanishing Point (VP) : the point where parallel lines meet after perspective projection, at the height of the viewpoint &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Depending on the number of vanishing points, there are one-point, two-point, and three-point perspective projections &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;With perspective transformation, the foreshortening of distances between object vertices varies with depth &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;One point&lt;/strong&gt;  perspective projection is simple to draw.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Two point&lt;/strong&gt;  perspective projection gives better impression of depth.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Three point&lt;/strong&gt;  perspective projection is most difficult to draw.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
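
&lt;p&gt;As a rough sketch (hypothetical helper functions, not from the original post), the difference between the two projections comes down to whether the depth coordinate is used:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data class Point3(val x: Double, val y: Double, val z: Double)
data class Point2(val x: Double, val y: Double)

// Parallel (orthographic) projection: simply drop the z coordinate,
// so size does not depend on the distance from the viewpoint
fun parallelProject(p: Point3): Point2 = Point2(p.x, p.y)

// Perspective projection with the viewpoint at the origin and the view
// plane at z = d: dividing by depth makes distant objects appear smaller
fun perspectiveProject(p: Point3, d: Double): Point2 =
    Point2(p.x * d / p.z, p.y * d / p.z)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;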

&lt;h2&gt;
  
  
  Translation &amp;amp; Rotation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Translation&lt;/strong&gt;&lt;br&gt;
In 3D translation, we transfer the Z coordinate along with the X and Y coordinates. The process for translation in 3D is similar to 2D translation. A translation moves an object into a different position on the screen. &lt;br&gt;
&lt;strong&gt;Rotation&lt;/strong&gt;&lt;br&gt;
3D rotation is not the same as 2D rotation. In 3D rotation, we have to specify the angle of rotation along with the axis of rotation. We can perform 3D rotation about the X, Y, and Z axes.&lt;/p&gt;
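
&lt;p&gt;A small sketch (not from the original post) of what specifying the axis means in code: rotation about the Z axis changes only x and y, while a translation simply offsets every coordinate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import kotlin.math.cos
import kotlin.math.sin

data class Vec3(val x: Double, val y: Double, val z: Double)

// Rotation about the Z axis; rotation about X or Y follows the same
// pattern with the other two coordinates
fun rotateZ(p: Vec3, angleRad: Double): Vec3 {
    val c = cos(angleRad)
    val s = sin(angleRad)
    return Vec3(
        x = p.x * c - p.y * s,
        y = p.x * s + p.y * c,
        z = p.z                      // the Z coordinate is unchanged
    )
}

// Translation: move the point by (dx, dy, dz)
fun translate(p: Vec3, dx: Double, dy: Double, dz: Double) =
    Vec3(p.x + dx, p.y + dy, p.z + dz)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;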

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bL8QDknQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123363710-0d5a9c80-d5ae-11eb-8b4a-d942904f05fb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bL8QDknQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123363710-0d5a9c80-d5ae-11eb-8b4a-d942904f05fb.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✔️ Rotation Result &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XRP2nZ_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123364019-a5588600-d5ae-11eb-9676-ed7c804866e3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XRP2nZ_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123364019-a5588600-d5ae-11eb-9676-ed7c804866e3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Polygon Meshes for Rendering
&lt;/h2&gt;

&lt;p&gt;Building a representation of an object before rendering is called modeling. &lt;br&gt;
Implicit functions are hard for the GPU to handle, so the surface is sampled into a set of planes; &lt;strong&gt;certain vertices are sampled and connected with edges to form polygons&lt;/strong&gt;. This is called a polygon mesh: the coordinates are expressed in 3D, and each vertex array cell stores a lot of information besides the coordinates. &lt;/p&gt;

&lt;p&gt;3D surfaces and solids can be approximated by a set of polygonal and line elements. Such surfaces are called  polygonal meshes. In polygon mesh, each edge is shared by at most two polygons. The set of polygons or faces, together form the “skin” of the object.&lt;br&gt;
This method can be used to represent a broad class of solids/surfaces in graphics. A polygonal mesh can be rendered using hidden surface removal algorithms. A polygon mesh can be represented in three ways&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit representation&lt;/li&gt;
&lt;li&gt;Pointers to a vertex list&lt;/li&gt;
&lt;li&gt;Pointers to an edge list &lt;/li&gt;
&lt;/ul&gt;
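
&lt;p&gt;A tiny sketch of the "pointers to a vertex list" idea with hypothetical types (not from the original post): each face stores indices into a shared vertex list instead of its own copies of the coordinates.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Shared vertex list: every coordinate is stored exactly once
data class Vertex(val x: Float, val y: Float, val z: Float)

// A face is a polygon given by indices into the vertex list, so an edge
// shared by two faces always refers to the same two vertices
class Face(val vertexIndices: IntArray)

// Two triangles sharing the edge between vertices 0 and 2
val vertices = listOf(
    Vertex(0f, 0f, 0f), Vertex(1f, 0f, 0f),
    Vertex(1f, 1f, 0f), Vertex(0f, 1f, 0f)
)
val faces = listOf(
    Face(intArrayOf(0, 1, 2)),
    Face(intArrayOf(0, 2, 3))
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;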

&lt;p&gt;&lt;strong&gt;Advantage&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  It can be used to model almost any object.&lt;/li&gt;
&lt;li&gt;  They are easy to represent as a collection of vertices.&lt;/li&gt;
&lt;li&gt;  They are easy to transform.&lt;/li&gt;
&lt;li&gt;  They are easy to draw on a computer screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Curved surfaces can only be approximately described.&lt;/li&gt;
&lt;li&gt;  It is difficult to simulate some type of objects like hair or liquid.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6uTQaJzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123364905-3419d280-d5b0-11eb-987c-f380e38eccb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6uTQaJzG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123364905-3419d280-d5b0-11eb-987c-f380e38eccb4.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Morphing
&lt;/h2&gt;

&lt;p&gt;Morphing is a technique in which one object turns into a completely different object, gradually showing the transition between two different images or 3D models. A -&amp;gt; B &lt;br&gt;
&lt;strong&gt;A&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sCFpwz7z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123365315-05502c00-d5b1-11eb-9173-1f51bdc5d80b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sCFpwz7z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123365315-05502c00-d5b1-11eb-9173-1f51bdc5d80b.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;B&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GKYwLl0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123365421-36306100-d5b1-11eb-9129-6304e99a96b4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GKYwLl0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/123365421-36306100-d5b1-11eb-9129-6304e99a96b4.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Morphing is an interpolation technique used to create from two objects a series of intermediate objects that change continuously to make a smooth transition from the source to the target. Morphing has been done in two dimensions by varying the values of the pixels of one image to make a different image, or in three dimensions by varying the values of three-dimensional pixels. We're presenting here a new type of morphing, which transforms the geometry of three dimensional models, creating intermediate objects which are all clearly defined three-dimensional objects, which can be translated, rotated, scaled, zoomed-into.&lt;/em&gt;&lt;/p&gt;
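
&lt;p&gt;A minimal sketch of that idea (hypothetical helper, not from the original post): geometric morphing between two models with matching vertex counts is a per-vertex linear interpolation driven by a parameter t in [0, 1].&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data class V3(val x: Double, val y: Double, val z: Double)

// t = 0.0 gives the source vertex from model A, t = 1.0 gives the target
// vertex from model B, and values in between give the intermediate shapes
fun morphVertex(a: V3, b: V3, t: Double) = V3(
    a.x + (b.x - a.x) * t,
    a.y + (b.y - a.y) * t,
    a.z + (b.z - a.z) * t
)

// Applying morphVertex to every corresponding vertex pair of two models
// with the same topology produces the in-between objects of the morph
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;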

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You can browse my computer graphics projects in the &lt;a href="https://github.com/jooyae/ComputerGraphics"&gt;ComputerGraphics&lt;/a&gt; repository on my GitHub. &lt;/p&gt;

</description>
      <category>programming</category>
      <category>cpp</category>
    </item>
    <item>
      <title>Progressive Growing GANs for Improved Quality </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Sat, 19 Jun 2021 08:18:00 +0000</pubDate>
      <link>https://dev.to/juyae/upgrade-pggan-lij</link>
      <guid>https://dev.to/juyae/upgrade-pggan-lij</guid>
      <description>&lt;h2&gt;
  
  
  Progressive Growing GANs for Improved Quality (PGGAN)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;What's PGGAN? Progressive Growing of GANs for Improved Quality, Stability, and Variation. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, I add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. I also propose a simple way to increase the variation in generated images, achieving a record inception score of 8.80 in unsupervised CIFAR10. Additionally, I describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, I suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, I could construct a higher-quality version of the CelebA dataset.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Figures&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Progressive growing of GANs&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EppvkpT9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122634593-00572c80-d11a-11eb-8ed5-bd0e0a4478ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EppvkpT9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122634593-00572c80-d11a-11eb-8ed5-bd0e0a4478ad.png"&gt;&lt;/a&gt;&lt;br&gt;
Training starts with both the generator G and discriminator D having a low spatial resolution of 4X4 pixels. As training advances, I add layers to G and D, increasing the spatial resolution of the generated images. All layers remain trainable throughout the process. NXN refers to convolutional layers operating on NXN spatial resolution. This allows stable synthesis at high resolutions and also speeds up training considerably. On the right, the six pictures above were generated using progressive growing at 1024X1024. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When the generator and discriminator layers are grown symmetrically one layer at a time, the network first captures the large-scale structure and then narrows down to finer-scale details. To soften the shock at the moment a new layer is added, a Highway Network-style structure is used, which improves the stability and speed of training. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Highway Network&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gf_zkGG5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635018-5f1da580-d11c-11eb-8818-cb30a6d1fbde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gf_zkGG5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635018-5f1da580-d11c-11eb-8818-cb30a6d1fbde.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When doubling the resolution of generator G and discriminator D I fade in the new layers smoothly. This illustrates the transition from 16X16 images to 32X32 images. &lt;br&gt;
During the transition (b), the layers that operate on the higher resolution are treated like a residual block, whose weight increases linearly from 0 to 1. &lt;strong&gt;toRGB&lt;/strong&gt; is a layer that projects feature vectors to RGB colors and &lt;strong&gt;fromRGB&lt;/strong&gt; does the reverse; both are 1X1 convolutions. When training the discriminator, real images are downscaled to match the current resolution of the network. During a resolution transition, we interpolate between two resolutions of the real images, similarly to how the generator output combines two resolutions. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lerp(a, b, t): return a + (b - a) * t
def lerp_clip(a, b, t): return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)

# ... (omitted)
if structure == 'linear':
    img = images_in
    x = fromrgb(img, resolution_log2)
    for res in range(resolution_log2, 2, -1):
        lod = resolution_log2 - res  # lod: levels-of-detail
        x = block(x, res)
        img = downscale2d(img)
        y = fromrgb(img, res - 1)
        with tf.variable_scope('Grow_lod%d' % lod):
            x = lerp_clip(x, y, lod_in - lod)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Unsupervised GANs tend to learn only a subset of the variation present in the training data. The minibatch discrimination technique feeds the discriminator statistics of the minibatch along with each image, improving how well diversity is learned, and here it is done without adding new parameters. &lt;/li&gt;
&lt;li&gt; Over the whole minibatch, compute the standard deviation of each feature at each spatial location. (Input: N x C x H x W, Output: C x H x W)&lt;/li&gt;
&lt;li&gt; From the values computed above, take the average over all features at each spatial location. (Input: C x H x W, Output: 1 x H x W)&lt;/li&gt;
&lt;li&gt;Append the computed average after the last layer as an extra feature map. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B8bjM17W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635256-cbe56f80-d11d-11eb-95e9-d70c7ffc7f25.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B8bjM17W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635256-cbe56f80-d11d-11eb-95e9-d70c7ffc7f25.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Normalization in generator and discriminator&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Equalized learning rate: unlike the usual approach, the weights are initialized from N(0,1), and the scale of the weight parameters is adjusted dynamically at runtime, dividing each weight by the per-layer normalization constant proposed in the He initializer. Popular optimizers such as RMSProp and ADAM normalize the gradient update, and since this is computed independently of the scale of the parameters, training can be ineffective for parameters with a large dynamic range. With this method every parameter shares the same dynamic range, so the same learning speed is guaranteed for all of them. &lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_weight(shape, gain=np.sqrt(2), use_wscale=False, fan_in=None):
    if fan_in is None: fan_in = np.prod(shape[:-1])
    std = gain / np.sqrt(fan_in)  # He init
    if use_wscale:
        wscale = tf.constant(np.float32(std), name='wscale')
        return tf.get_variable('weight', shape=shape, initializer=tf.initializers.random_normal()) * wscale
    else:
        return tf.get_variable('weight', shape=shape, initializer=tf.initializers.random_normal(0, std))
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✔️ &lt;em&gt;GANs tend to see their signal magnitudes escalate as a result of unhealthy competition between the generator and discriminator. Earlier work proposed a modified batch normalization as a remedy, but the issue in GANs is not covariate shift (the problem BN was designed to address), so this work focuses on directly constraining signal magnitudes and the competition itself.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;✔️ &lt;em&gt;Pixelwise feature vector normalization in the generator: to keep the magnitudes in the generator and discriminator from spiraling out of control due to competition, the feature vector at each pixel is normalized after each convolutional layer. This did not change the quality of the results much, but it effectively prevents signal magnitudes from escalating when needed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6VSbb_QH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635791-c89fb300-d120-11eb-9693-75ab67e5476e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6VSbb_QH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635791-c89fb300-d120-11eb-9693-75ab67e5476e.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def pixel_norm(x, epsilon=1e-8):
    with tf.variable_scope('PixelNorm'):
        return x * tf.rsqrt(tf.reduce_mean(tf.square(x), axis=1, keepdims=True) + epsilon)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Increasing Variation using MiniBatch Standard Deviation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the real images we have are downscaled to match the current resolution, the low-resolution images change as well, but note that they still carry information about the high-resolution images. Training proceeds by linearly interpolating between the two resolutions. As is typical of unsupervised learning, the generator tends to produce images with less variation than the feature information found in the training data, which makes it hard to generate high-resolution images. To address this, PGGAN proposes the minibatch standard deviation technique: by making the discriminator compute feature statistics over the entire minibatch, it helps distinguish batches of real training images from batches of fake images, and it encourages the generator to produce richer variation instead of low variance, so that the feature statistics of generated batches become more similar to those of the training batches! &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Convergence and Training Speed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P4oAe9Vy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635702-6e065700-d120-11eb-886e-263b32a6518a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4oAe9Vy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635702-6e065700-d120-11eb-886e-263b32a6518a.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Resolution Image Generation Using CELEBA-HQ Datasets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qGXdr-5S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635832-fe449c00-d120-11eb-91ae-5549a4ac0e30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qGXdr-5S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122635832-fe449c00-d120-11eb-91ae-5549a4ac0e30.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Producing high-resolution results requires about &lt;strong&gt;4 days&lt;/strong&gt; of training on &lt;strong&gt;8 Tesla V100 GPUs&lt;/strong&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Compared to MS-SSIM, SWD captures variation in color, texture, and viewpoint much better. When comparing the outputs of a model whose training was stopped midway with a fully trained model, MS-SSIM failed to detect a large difference, while SWD was good at detecting how closely the distribution of generated images matched the training set.&lt;/li&gt;
&lt;li&gt;  Progressive growing converges to a better optimum and reduces training time. Measured at about 6.4 million training images, it was roughly 5.4x faster than the non-progressive variant.&lt;/li&gt;
&lt;li&gt;  To demonstrate high-resolution output, a high-quality version of CELEBA was created using the method in Appendix C (1024 x 1024, 30,000 images). Training took 4 days on 8 Tesla V100 GPUs and produced very impressive results (see the demo above). The LSGAN and WGAN-GP loss functions were each used for training; LSGAN was somewhat less stable, and using the LSGAN loss requires some additional techniques, described in Appendix B.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>deeplearning</category>
      <category>python</category>
    </item>
    <item>
      <title>Convolutional Neural Networks to Categorizing</title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Fri, 18 Jun 2021 03:00:03 +0000</pubDate>
      <link>https://dev.to/juyae/convolutional-neural-networks-to-categorizing-55pd</link>
      <guid>https://dev.to/juyae/convolutional-neural-networks-to-categorizing-55pd</guid>
      <description>&lt;h2&gt;
  
  
  &lt;em&gt;CNN - the concept behind recent breakthroughs and developments in deep learning&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;These are various datasets that I used for applying convolutional neural networks. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MNIST&lt;/li&gt;
&lt;li&gt;CIFAR-10&lt;/li&gt;
&lt;li&gt;ImageNet &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Before going for transfer learning, try improving your base CNN models first.&lt;/em&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Classify Hand-written Digits on the MNIST Dataset &lt;/li&gt;
&lt;li&gt;Identify images from CIFAR-10 Datasets using CNNs &lt;/li&gt;
&lt;li&gt;Categorizing images of ImageNet Datasets using CNNs &lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Classify Hand written digits
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;If you're studying deep learning, I'm confident you have already seen this picture. This dataset is often used for practicing algorithms made for image classification, as it is fairly easy to conquer. Hence, I recommend it as your first dataset if you have just forayed into this field.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BagsT1-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122495688-7463d880-d025-11eb-91af-be8da3bada3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BagsT1-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122495688-7463d880-d025-11eb-91af-be8da3bada3f.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MNIST is a well-known dataset used in computer vision, composed of images of handwritten digits (0-9), split into a training set of &lt;strong&gt;50,000 images&lt;/strong&gt; and a test set of 10,000, where each image is &lt;strong&gt;28X28 pixels&lt;/strong&gt; in width and height. &lt;/li&gt;
&lt;li&gt;Before we train a CNN model, we have to build a basic &lt;strong&gt;Fully Connected Neural Network&lt;/strong&gt; for the dataset. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fully Connected Neural Network
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Flatten the input image dimensions to 1D (width pixels x height pixels)&lt;/li&gt;
&lt;li&gt; Normalize the image pixel values (divide by 255)&lt;/li&gt;
&lt;li&gt; One-Hot Encode the categorical column&lt;/li&gt;
&lt;li&gt; Build a model architecture (Sequential) with Dense layers&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Train the model and make predictions&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_t1vP0vI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122496700-28b22e80-d027-11eb-9895-a3487a8d3d15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_t1vP0vI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122496700-28b22e80-d027-11eb-9895-a3487a8d3d15.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;✔️ &lt;em&gt;You can get good validation accuracy around 97%. One major advantage of using CNNs over NNs is that you do not need to flatten the input images to 1D, as they are capable of working with image data in 2D. This helps in retaining the “spatial” properties of images. I additionally add more convolutional layers and tune the hyperparameters for accuracy, which brings it up to 98%+ accuracy.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RlK32_xO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122497045-b8f07380-d027-11eb-8b46-4dd7d032fc5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RlK32_xO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122497045-b8f07380-d027-11eb-8b46-4dd7d032fc5e.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Identify CIFAR10
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Now we will classify 32X32-pixel images: 50,000 training images and 10,000 test images. These images are taken in varying lighting conditions and at different angles, and since these are colored images, you will see that there are many variations in the color itself of similar objects (for example, the color of ocean water). If you use the simple &lt;a href="https://courses.analyticsvidhya.com/courses/convolutional-neural-networks-cnn-from-scratch?utm_source=blog&amp;amp;utm_medium=learn-image-classification-cnn-convolutional-neural-networks-3-datasets"&gt;CNN&lt;/a&gt; architecture that we saw in the MNIST example above, you will get a low validation accuracy of around 60%.&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

model = Sequential()
# convolutional layer
model.add(Conv2D(50, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu', input_shape=(32, 32, 3)))
# convolutional layer
model.add(Conv2D(75, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Conv2D(125, kernel_size=(3,3), strides=(1,1), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# flatten output of conv
model.add(Flatten())
# hidden layer
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(250, activation='relu'))
model.add(Dropout(0.3))
# output layer
model.add(Dense(10, activation='softmax'))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;I added more Conv2D layers to make the model deeper&lt;/li&gt;
&lt;li&gt;I also increased the number of filters so the model can distinguish more features&lt;/li&gt;
&lt;li&gt;Dropout is used for regularization (rates modified)&lt;/li&gt;
&lt;li&gt;I added more Dense layers (a compile-and-train sketch follows this list)&lt;/li&gt;
&lt;/ul&gt;
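
&lt;p&gt;&lt;em&gt;The block above only defines the model. Here is a minimal sketch of how it might be compiled and trained, assuming CIFAR-10 is loaded via keras.datasets and one-hot encoded; the optimizer, batch size, and epoch count are illustrative:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from keras.datasets import cifar10
from keras.utils import to_categorical

# load and prepare CIFAR-10 (32x32 RGB images, 10 classes)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# compile and train the model defined above
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=20, validation_data=(x_test, y_test))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;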

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tu_imj----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122498645-741a0c00-d02a-11eb-99d0-7e1c9cd513f8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tu_imj----/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122498645-741a0c00-d02a-11eb-99d0-7e1c9cd513f8.png"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;✔️ &lt;em&gt;In conclusion, you can see that the accuracy increases with these changes. There’s also CIFAR-100 available in Keras, which you can use for further practice.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Build Basic CNN model for Image Classification
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Architecture of the model :&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JZ6rmNxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499312-ad9f4700-d02b-11eb-9ae6-9d4134c17f01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JZ6rmNxG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499312-ad9f4700-d02b-11eb-9ae6-9d4134c17f01.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;Fully Connected Neural Network (Keras) :&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F-kuoSNq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499454-eb03d480-d02b-11eb-947b-2517656ce7c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F-kuoSNq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499454-eb03d480-d02b-11eb-947b-2517656ce7c0.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;After just 10 epochs, you can see 94%+ validation accuracy:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qlZDwGGM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499644-37e7ab00-d02c-11eb-9dbb-bf045d8d582b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qlZDwGGM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122499644-37e7ab00-d02c-11eb-9dbb-bf045d8d582b.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>machinelearning</category>
      <category>computerscience</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Innovative Technology NVIDIA StyleGAN2 </title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Wed, 16 Jun 2021 04:38:18 +0000</pubDate>
      <link>https://dev.to/juyae/innovative-technology-nvidia-stylegan2-1a9n</link>
      <guid>https://dev.to/juyae/innovative-technology-nvidia-stylegan2-1a9n</guid>
      <description>&lt;h2&gt;
  
  
  StyleGAN2 - Official TensorFlow Implementation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Style Based Generator Architecture for Generative Adversarial Networks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Development Environment&lt;/em&gt; &lt;br&gt;
&lt;em&gt;I used the deep-learning server environment provided by Seoul Women's University.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  64-bit &lt;strong&gt;Python 3.6&lt;/strong&gt; installation. I recommend Anaconda3 with numpy 1.14.3 or newer.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;TensorFlow 1.10.0&lt;/strong&gt; or newer with GPU support.&lt;/li&gt;
&lt;li&gt;  One or more &lt;strong&gt;high-end NVIDIA GPUs&lt;/strong&gt; with at least 11GB of DRAM. We recommend &lt;strong&gt;NVIDIA DGX-1 with 8 Tesla V100 GPUs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  NVIDIA driver 391.35 or newer, &lt;strong&gt;CUDA toolkit 9.0&lt;/strong&gt; or newer, cuDNN 7.3.1 or newer.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Do you think these people are REAL ?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1sx3RfQB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122155722-89642e80-cea2-11eb-9219-23135c18f015.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1sx3RfQB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122155722-89642e80-cea2-11eb-9219-23135c18f015.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Answer is NO&lt;/em&gt;. &lt;em&gt;These people are produced by NVIDIA's generator that allows control over different aspects of the image. The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent vectors to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Material related to our paper is available via the following links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Paper: &lt;a href="https://arxiv.org/abs/1812.04948"&gt;https://arxiv.org/abs/1812.04948&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Video: &lt;a href="https://youtu.be/kSLJriaOumA"&gt;https://youtu.be/kSLJriaOumA&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Code: &lt;a href="https://github.com/NVlabs/stylegan"&gt;https://github.com/NVlabs/stylegan&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;FFHQ: &lt;a href="https://github.com/NVlabs/ffhq-dataset"&gt;https://github.com/NVlabs/ffhq-dataset&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1uka3a1noXHAydRPRbknqwKVGODvnmUBX"&gt;StyleGAN&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Main folder.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1v-HkF3Ehrpon7wVIx4r5DLcko_U_V6Lt"&gt;stylegan-paper.pdf&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;High-quality version of the paper PDF.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1uzwkZHQX_9pYg1i0d1Nbe3D9xPO8-qBf"&gt;stylegan-video.mp4&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;High-quality version of the result video.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1-l46akONUWF6LCpDoeq63H53rD7MeiTd"&gt;images&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Example images produced using our generator.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1ToY5P4Vvf5_c3TyUizQ8fckFFoFtBvD8"&gt;representative-images&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;High-quality images to be used in articles, blog posts, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=100DJ0QXyG89HZzB4w2Cbyf4xjNK54cQ1"&gt;100k-generated-images&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;100,000 generated images for different amounts of truncation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=14lm8VRN1pr4g_KVe6_LvyDX1PObst6d4"&gt;ffhq-1024x1024&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Generated using Flickr-Faces-HQ dataset at 1024×1024.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1Vxz9fksw4kgjiHrvHkX4Hze4dyThFW6t"&gt;bedrooms-256x256&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Generated using LSUN Bedroom dataset at 256×256.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1MFCvOMdLE2_mpeLPTiDw5dxc2CRuKkzS"&gt;cars-512x384&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Generated using LSUN Car dataset at 512×384.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1gq-Gj3GRFiyghTPKhp8uDMA9HV_0ZFWQ"&gt;cats-256x256&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Generated using LSUN Cat dataset at 256×256.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1N8pOd_Bf8v89NGUaROdbD8-ayLPgyRRo"&gt;videos&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Example videos produced using our generator.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1NFO7_vH0t98J13ckJYFd7kuaTkyeRJ86"&gt;high-quality-video-clips&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Individual segments of the result video as high-quality MP4.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP"&gt;ffhq-dataset&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Raw data for the &lt;a href="https://github.com/NVlabs/ffhq-dataset"&gt;Flickr-Faces-HQ dataset&lt;/a&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1MASQyN5m0voPcx7-9K0r5gObhvvPups7"&gt;networks&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Pre-trained networks as pickled instances of &lt;a href="//./dnnlib/tflib/network.py"&gt;dnnlib.tflib.Network&lt;/a&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ"&gt;stylegan-ffhq-1024x1024.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;StyleGAN trained with Flickr-Faces-HQ dataset at 1024×1024.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MGqJl28pN4t7SAtSrPdSRJSQJqahkzUf"&gt;stylegan-celebahq-1024x1024.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;StyleGAN trained with CelebA-HQ dataset at 1024×1024.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MOSKeGF0FJcivpBI7s63V9YHloUTORiF"&gt;stylegan-bedrooms-256x256.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;StyleGAN trained with LSUN Bedroom dataset at 256×256.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MJ6iCfNtMIRicihwRorsM3b7mmtmK9c3"&gt;stylegan-cars-512x384.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;StyleGAN trained with LSUN Car dataset at 512×384.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MQywl0FNt6lHu8E_EUqnRbviagS7fbiJ"&gt;stylegan-cats-256x256.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;StyleGAN trained with LSUN Cat dataset at 256×256.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/open?id=1MvYdWCBuMfnoYGptRH-AgKLbPTsIQLhl"&gt;metrics&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Auxiliary networks for the quality and disentanglement metrics.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1MzTY44rLToO5APn8TZmfR7_ENSe5aZUn"&gt;inception_v3_features.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Standard &lt;a href="https://arxiv.org/abs/1512.00567"&gt;Inception-v3&lt;/a&gt; classifier that outputs a raw feature vector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1N2-m9qszOeVC9Tq77WxsLnuWwOedQiD2"&gt;vgg16_zhang_perceptual.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Standard &lt;a href="https://arxiv.org/abs/1801.03924"&gt;LPIPS&lt;/a&gt; metric to estimate perceptual similarity.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://drive.google.com/uc?id=1Q5-AI6TwWhCVM7Muu4tBM7rp5nG_gmCX"&gt;celebahq-classifier-00-male.pkl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Binary classifier trained to detect a single attribute of CelebA-HQ.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  Training StyleGAN Networks
&lt;/h2&gt;

&lt;p&gt;Once the datasets are set up, you can train your own StyleGAN networks as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit &lt;a href="//./train.py"&gt;train.py&lt;/a&gt; to specify the dataset and training configuration by uncommenting or editing specific lines (an illustrative sketch of such edits follows this list).&lt;/li&gt;
&lt;li&gt;Run the training script with &lt;code&gt;python train.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The results are written to a newly created directory &lt;code&gt;results/&amp;lt;ID&amp;gt;-&amp;lt;DESCRIPTION&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The training may take several days (or weeks) to complete, depending on the configuration.&lt;/li&gt;
&lt;/ol&gt;
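
&lt;p&gt;Purely as an illustration of step 1 (the real variable names and values live in &lt;code&gt;train.py&lt;/code&gt; itself, so treat this as a sketch rather than the literal contents of the file), the edits amount to uncommenting or tweaking configuration lines of roughly this form:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative only: select the dataset by enabling one of the dataset blocks.
desc += '-ffhq'
dataset = EasyDict(tfrecord_dir='ffhq')   # expects datasets/ffhq/*.tfrecords
train.mirror_augment = True

# Illustrative only: choose the number of GPUs to train with.
desc += '-8gpu'
submit_config.num_gpus = 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;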

&lt;p&gt;By default, &lt;code&gt;train.py&lt;/code&gt; is configured to train the highest-quality StyleGAN (configuration F in Table 1) for the FFHQ dataset at 1024×1024 resolution using 8 GPUs. Please note that we have used 8 GPUs in all of our experiments. Training with fewer GPUs may not produce identical results – if you wish to compare against our technique, we strongly recommend using the same number of GPUs.&lt;/p&gt;

&lt;p&gt;Expected training times for the default configuration using Tesla V100 GPUs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;GPUs&lt;/th&gt;
&lt;th&gt;1024×1024&lt;/th&gt;
&lt;th&gt;512×512&lt;/th&gt;
&lt;th&gt;256×256&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;41 days 4 hours&lt;/td&gt;
&lt;td&gt;24 days 21 hours&lt;/td&gt;
&lt;td&gt;14 days 22 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;21 days 22 hours&lt;/td&gt;
&lt;td&gt;13 days 7 hours&lt;/td&gt;
&lt;td&gt;9 days 5 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;11 days 8 hours&lt;/td&gt;
&lt;td&gt;7 days 0 hours&lt;/td&gt;
&lt;td&gt;4 days 21 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;6 days 14 hours&lt;/td&gt;
&lt;td&gt;4 days 10 hours&lt;/td&gt;
&lt;td&gt;3 days 8 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Training takes a long time, so I recommend setting up NVIDIA's CUDA environment on capable GPUs before you start.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Datasets for training
&lt;/h2&gt;

&lt;p&gt;The training and evaluation scripts operate on datasets stored as multi-resolution TFRecords. Each dataset is represented by a directory containing the same image data in several resolutions to enable efficient streaming. There is a separate *.tfrecords file for each resolution, and if the dataset contains labels, they are stored in a separate file as well. By default, the scripts expect to find the datasets at &lt;code&gt;datasets/&amp;lt;NAME&amp;gt;/&amp;lt;NAME&amp;gt;-&amp;lt;RESOLUTION&amp;gt;.tfrecords&lt;/code&gt;. The directory can be changed by editing &lt;a href="//./config.py"&gt;config.py&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;result_dir = 'results'
data_dir = 'datasets'
cache_dir = 'cache'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To obtain the FFHQ dataset (&lt;code&gt;datasets/ffhq&lt;/code&gt;), please refer to the &lt;a href="https://github.com/NVlabs/ffhq-dataset"&gt;Flickr-Faces-HQ repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To obtain the CelebA-HQ dataset (&lt;code&gt;datasets/celebahq&lt;/code&gt;), please refer to the &lt;a href="https://github.com/tkarras/progressive_growing_of_gans"&gt;Progressive GAN repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To obtain other datasets, including LSUN, please consult their corresponding project pages. The datasets can be converted to multi-resolution TFRecords using the provided &lt;a href="//./dataset_tool.py"&gt;dataset_tool.py&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; python dataset_tool.py create_lsun datasets/lsun-bedroom-full ~/lsun/bedroom_lmdb --resolution 256
&amp;gt; python dataset_tool.py create_lsun_wide datasets/lsun-car-512x384 ~/lsun/car_lmdb --width 512 --height 384
&amp;gt; python dataset_tool.py create_lsun datasets/lsun-cat-full ~/lsun/cat_lmdb --resolution 256
&amp;gt; python dataset_tool.py create_cifar10 datasets/cifar10 ~/cifar10
&amp;gt; python dataset_tool.py create_from_images datasets/custom-dataset ~/custom-images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Disentanglement &amp;amp; Evaluating Quality
&lt;/h2&gt;

&lt;p&gt;The quality and disentanglement metrics used in our paper can be evaluated using &lt;a href="//./run_metrics.py"&gt;run_metrics.py&lt;/a&gt;. By default, the script will evaluate the Fréchet Inception Distance (&lt;code&gt;fid50k&lt;/code&gt;) for the pre-trained FFHQ generator and write the results into a newly created directory under &lt;code&gt;results&lt;/code&gt;. The exact behavior can be changed by uncommenting or editing specific lines in &lt;a href="//./run_metrics.py"&gt;run_metrics.py&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Expected evaluation time and results for the pre-trained FFHQ generator using one Tesla V100 GPU:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;fid50k&lt;/td&gt;
&lt;td&gt;16 min&lt;/td&gt;
&lt;td&gt;4.4159&lt;/td&gt;
&lt;td&gt;Fréchet Inception Distance using 50,000 images.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ppl_zfull&lt;/td&gt;
&lt;td&gt;55 min&lt;/td&gt;
&lt;td&gt;664.8854&lt;/td&gt;
&lt;td&gt;Perceptual Path Length for full paths in &lt;em&gt;Z&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ppl_wfull&lt;/td&gt;
&lt;td&gt;55 min&lt;/td&gt;
&lt;td&gt;233.3059&lt;/td&gt;
&lt;td&gt;Perceptual Path Length for full paths in &lt;em&gt;W&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ppl_zend&lt;/td&gt;
&lt;td&gt;55 min&lt;/td&gt;
&lt;td&gt;666.1057&lt;/td&gt;
&lt;td&gt;Perceptual Path Length for path endpoints in &lt;em&gt;Z&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ppl_wend&lt;/td&gt;
&lt;td&gt;55 min&lt;/td&gt;
&lt;td&gt;197.2266&lt;/td&gt;
&lt;td&gt;Perceptual Path Length for path endpoints in &lt;em&gt;W&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ls&lt;/td&gt;
&lt;td&gt;10 hours&lt;/td&gt;
&lt;td&gt;z: 165.0106&lt;br&gt;w: 3.7447&lt;/td&gt;
&lt;td&gt;Linear Separability in &lt;em&gt;Z&lt;/em&gt; and &lt;em&gt;W&lt;/em&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Training curves for FFHQ config F (StyleGAN2) compared to original StyleGAN using 8 GPUs:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/NVlabs/stylegan2/blob/master/docs/stylegan2-training-curves.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lQf0Vy-B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://github.com/NVlabs/stylegan2/raw/master/docs/stylegan2-training-curves.png" alt="Training curves"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Path Length Regularization to smooth latent space
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8mLEh-fZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122158106-07c2cf80-cea7-11eb-84a4-36f56116aba1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8mLEh-fZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122158106-07c2cf80-cea7-11eb-84a4-36f56116aba1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As the abstract quoted above notes, the generator is regularized to encourage good conditioning in the mapping from latent vectors to images. Beyond improving image quality, this path length regularizer makes the generator significantly easier to invert, which in turn makes it possible to reliably detect whether an image was generated by a particular network.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;StyleGAN2 was my &lt;em&gt;CAPSTONE DESIGN FINAL PROJECT&lt;/em&gt;. Next time I will post about GAN basics and the StyleGAN2 method proposed in the &lt;em&gt;NVIDIA&lt;/em&gt; paper, focusing first on the main metrics and normalization. See you then! &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/1812.04948"&gt;https://arxiv.org/abs/1812.04948&lt;/a&gt;  (A Style-Based Generator Architecture for Generative Adversarial Networks)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/1912.04958.pdf"&gt;https://arxiv.org/pdf/1912.04958.pdf&lt;/a&gt;  (Analyzing and Improving the Image Quality of StyleGAN)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/analytics-vidhya/from-gan-basic-to-stylegan2-680add7abe82"&gt;https://medium.com/analytics-vidhya/from-gan-basic-to-stylegan2-680add7abe82&lt;/a&gt;  (From GAN basic to StyleGAN2)&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>python</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Coroutine : Light Weight Threads for Parallel Programming</title>
      <dc:creator>Juyae </dc:creator>
      <pubDate>Wed, 16 Jun 2021 03:20:35 +0000</pubDate>
      <link>https://dev.to/juyae/coroutine-ca6</link>
      <guid>https://dev.to/juyae/coroutine-ca6</guid>
      <description>&lt;h2&gt;
  
  
  Coroutine
&lt;/h2&gt;

&lt;p&gt;A coroutine is a &lt;em&gt;light-weight thread&lt;/em&gt; that provides asynchronous/non-blocking programming and can take the place of Thread, AsyncTask, or Rx work: a technique for lightweight, flexible parallel programming. &lt;/p&gt;

&lt;h2&gt;
  
  
  Improve App Performance with Kotlin Coroutines
&lt;/h2&gt;

&lt;p&gt;Why use coroutines? Let's look at five main reasons. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Writing asynchronous code&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can write clean, streamlined asynchronous code that keeps the app responsive while managing long-running work such as network calls or disk operations. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Managing long-running tasks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Coroutines build on regular functions by adding two operations for handling long-running work. &lt;/li&gt;
&lt;li&gt;In addition to invoke and return, they add suspend and resume. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Stack frames&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The running function is managed together with its local variables. &lt;/li&gt;
&lt;li&gt;When a coroutine suspends, its current stack frame is copied and saved; when it resumes, the saved frame is copied back and execution continues from where it left off. &lt;/li&gt;
&lt;li&gt;The code may look like an ordinary sequential, blocking request, but it ensures the network request does not block the main thread. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Main safety&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Kotlin, every coroutine runs in a dispatcher, even when it is running on the main thread. &lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A coroutine can suspend itself, and the dispatcher is responsible for resuming it. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Dispatchers.Main&lt;/strong&gt;  - Use this dispatcher to run coroutines on the main Android thread. It should only be used to interact with the UI and to perform quick work, for example calling  &lt;code&gt;suspend&lt;/code&gt;  functions, running Android UI framework operations, and updating  &lt;a href="https://developer.android.com/topic/libraries/architecture/livedata?hl=ko"&gt;&lt;code&gt;LiveData&lt;/code&gt;&lt;/a&gt;  objects. &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dispatchers.IO&lt;/strong&gt;  - This dispatcher is optimized to perform disk or network I/O off the main thread, for example using the  &lt;a href="https://developer.android.com/topic/libraries/architecture/room?hl=ko"&gt;Room component&lt;/a&gt;, reading from or writing to files, and running network operations.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Dispatchers.Default&lt;/strong&gt;  - This dispatcher is optimized to run CPU-intensive work off the main thread, for example sorting a list or parsing JSON.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Typical cases that require a background task&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network requests (Retrofit, OkHttp3, UrlConnection, etc.) &lt;/li&gt;
&lt;li&gt;Local databases (Room, SQLite, etc.) &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Differences from Threads
&lt;/h2&gt;

&lt;p&gt;Thread &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A thread is linked directly to a native OS thread, so it uses a lot of system resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Switching between threads also requires checking CPU state, which incurs a corresponding cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MALvycjY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122152297-09d36100-ce9c-11eb-9c29-151bce0c4cfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MALvycjY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122152297-09d36100-ce9c-11eb-9c29-151bce0c4cfd.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Coroutine&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A coroutine does not run immediately, and unlike a thread it is not tied to the OS, so that cost is saved.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Switching between coroutines does not cause a context switch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The developer can specify exactly when a routine runs and when it ends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Routines created this way are not affected by the system when switching between tasks, so no corresponding cost is incurred.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kt6VWE3r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122152391-2d96a700-ce9c-11eb-939e-13610472feea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kt6VWE3r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122152391-2d96a700-ce9c-11eb-939e-13610472feea.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CoroutineScope &amp;amp; GlobalScope&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CoroutineScope is the scope of a coroutine: a unit that lets you control a group of coroutine blocks together. &lt;/li&gt;
&lt;li&gt;GlobalScope is one kind of CoroutineScope for top-level coroutines; it is tied to the lifecycle of the Application. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CoroutineContext&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CoroutineContext is the set of information describing how a coroutine should be handled; its main elements are the Job &amp;amp; Dispatcher. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dispatcher&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dispatcher is a main element of CoroutineContext.&lt;/li&gt;
&lt;li&gt;It builds on the CoroutineContext described above and defines which thread the work runs on and how it runs. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Coroutine Best Practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Inject dispatchers&lt;/strong&gt;&lt;br&gt;
Do not hardcode Dispatchers when creating a new coroutine or calling withContext.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// DO inject Dispatchers
class NewsRepository(
    private val defaultDispatcher: CoroutineDispatcher = Dispatchers.Default
) {
    suspend fun loadNews() = withContext(defaultDispatcher) { /* ... */ }
}

// DO NOT hardcode Dispatchers
class NewsRepository {
    // DO NOT use Dispatchers.Default directly, inject it instead
    suspend fun loadNews() = withContext(Dispatchers.Default) { /* ... */ }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Suspend functions should be safe to call from the main thread&lt;/strong&gt;&lt;br&gt;
A suspend function should be main-safe, meaning it is safe to call from the main thread. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vK6igj8Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122151195-15258d00-ce9a-11eb-9bdf-20f688540345.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vK6igj8Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122151195-15258d00-ce9a-11eb-9bdf-20f688540345.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern makes the app more &lt;strong&gt;scalable&lt;/strong&gt;: &lt;br&gt;
a class that calls a suspend function no longer needs to worry about which Dispatcher to use for which type of work; that responsibility lies with the class that executes the work. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ViewModel should create coroutines&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Views should not trigger coroutines directly to run business logic; leaving that responsibility to the ViewModel makes the business logic easier to test.&lt;/li&gt;
&lt;li&gt;In addition, when work is launched in viewModelScope, the coroutine automatically survives configuration changes. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Watch out for exceptions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If exceptions thrown inside coroutines are handled incorrectly, the app can &lt;strong&gt;crash&lt;/strong&gt;.
 &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UdKm_qiR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://user-images.githubusercontent.com/58849278/122152025-90d40980-ce9b-11eb-8059-b516f28f36c9.png"&gt;
Therefore, catch exceptions with code like the example shown.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summarizing the material above, we covered the overall theory behind coroutines. &lt;br&gt;
In the next post, I plan to work through example code that applies coroutines. &lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>android</category>
    </item>
  </channel>
</rss>
