<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: RabbitQ</title>
    <description>The latest articles on DEV Community by RabbitQ (@ehottl).</description>
    <link>https://dev.to/ehottl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F93518%2F810a0c86-908c-4ae6-9dc7-3654c3f98288.jpg</url>
      <title>DEV Community: RabbitQ</title>
      <link>https://dev.to/ehottl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ehottl"/>
    <language>en</language>
    <item>
      <title>Epitope binning with Python</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Sun, 23 Feb 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ehottl/paisseoneuro-epitope-binning-hagi-1ch1</link>
      <guid>https://dev.to/ehottl/paisseoneuro-epitope-binning-hagi-1ch1</guid>
      <description>&lt;p&gt;Therapeutic monoclonal antibodies (mAbs) account for more than 70% of the biopharmaceutical market and continue to grow. In the early stages of antibody development, it is important to select candidates with properties suitable for use as therapeutics and diagnostic tools. Epitope binning is a method for characterizing how mAbs bind their target protein (antigen). mAbs specific to the same target protein are tested pairwise to assess whether they block each other's binding to a particular site on the antigen. mAbs that block each other's binding to the same epitope are grouped together into a "bin". Because mAbs in the same bin often behave similarly, epitope bins reveal the diversity of candidate antibodies. Epitope diversity also matters for broadening intellectual-property protection: antibodies that bind the same antigen can have different mechanisms of action, which is important for treating some cancers and infectious diseases.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Epitope binning should not be confused with epitope mapping. In epitope mapping, antibody binding is tested against individual fragments of the antigen to define the specific epitope that the antibody binds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A key advantage of epitope binning by SPR is that it requires only the antigen and small amounts of purified antibody. Briefly, the principle is as follows: the first antibody is immobilized, then the antigen and a second antibody are introduced while the RU (response units) signal is measured. When the two epitopes do not overlap, the RU value is high; when the epitopes are similar, the RU value is low. Let's now analyze epitope binning data obtained by SPR in Python and identify the antibody communities it contains.&lt;/p&gt;
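The blocking call described above can be sketched with a toy example. The antibody names, RU values, and cutoff below are hypothetical, purely to illustrate the low-RU-means-blocked logic:

```python
# Toy sketch of the SPR blocking call: a low RU for the second antibody
# means its epitope overlaps that of the immobilized antibody.
def call_blocking(ru_value, cutoff=50.0):
    """Return 'blocked' when the second antibody cannot bind (low RU)."""
    return "blocked" if ru_value < cutoff else "not blocked"

# Hypothetical pairwise RU readings (first antibody immobilized).
pairs = {("mAb-A", "mAb-B"): 12.0, ("mAb-A", "mAb-C"): 180.0}
calls = {pair: call_blocking(ru) for pair, ru in pairs.items()}
print(calls)
```

In a real experiment the cutoff would be chosen from control sensorgrams rather than fixed in advance.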

&lt;h1&gt;
&lt;span&gt;1&lt;/span&gt; Data preparation&lt;/h1&gt;

&lt;p&gt;We'll start with the data published by &lt;a href="https://doi.org/10.1093/abt/tbaa016" rel="noopener noreferrer"&gt;Tom Z Yuan et al&lt;/a&gt;.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb1-1"&gt;&lt;span&gt;import&lt;/span&gt; pandas &lt;span&gt;as&lt;/span&gt; pd&lt;/span&gt;
&lt;span id="cb1-2"&gt;&lt;/span&gt;
&lt;span id="cb1-3"&gt;&lt;span&gt;# 데이터 불러오기(원본 데이터는 wide 형태의 데이터 입니다.)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-4"&gt;df &lt;span&gt;=&lt;/span&gt; pd.read_csv(&lt;span&gt;"../data/input/EpitopeBinning.csv"&lt;/span&gt;, index_col&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-5"&gt;&lt;/span&gt;
&lt;span id="cb1-6"&gt;&lt;span&gt;# wide데이터를 tidy 형태로 변환&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-7"&gt;tidy_df &lt;span&gt;=&lt;/span&gt; df.reset_index().melt(&lt;/span&gt;
&lt;span id="cb1-8"&gt; id_vars&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"mAb ID"&lt;/span&gt;, var_name&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Antigen"&lt;/span&gt;, value_name&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Binding Value"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-9"&gt;)&lt;/span&gt;
&lt;span id="cb1-10"&gt;tidy_df.columns &lt;span&gt;=&lt;/span&gt; [&lt;span&gt;"First_ab"&lt;/span&gt;, &lt;span&gt;"Second_ab"&lt;/span&gt;, &lt;span&gt;"Binding"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb1-11"&gt;tidy_df &lt;span&gt;=&lt;/span&gt; tidy_df.set_index(&lt;span&gt;"First_ab"&lt;/span&gt;).sort_values(by&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Binding"&lt;/span&gt;, ascending&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-12"&gt;&lt;span&gt;# 변환한 데이터 저장하기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-13"&gt;&lt;/span&gt;
&lt;span id="cb1-14"&gt;&lt;span&gt;# 히트맵을 그리기 위해 다시 wide로 만들기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-15"&gt;wide_data &lt;span&gt;=&lt;/span&gt; tidy_df.pivot_table(&lt;/span&gt;
&lt;span id="cb1-16"&gt; index&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"First_ab"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb1-17"&gt; columns&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Second_ab"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb1-18"&gt; values&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Binding"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb1-19"&gt; aggfunc&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"mean"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb1-20"&gt;)&lt;/span&gt;
&lt;span id="cb1-21"&gt;wide_data.head()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Second_ab&lt;/th&gt;
&lt;th&gt;ADI-15734&lt;/th&gt;
&lt;th&gt;ADI-15741&lt;/th&gt;
&lt;th&gt;ADI-15742&lt;/th&gt;
&lt;th&gt;ADI-15743&lt;/th&gt;
&lt;th&gt;ADI-15751&lt;/th&gt;
&lt;th&gt;ADI-15757&lt;/th&gt;
&lt;th&gt;ADI-15767&lt;/th&gt;
&lt;th&gt;ADI-15776&lt;/th&gt;
&lt;th&gt;ADI-15779&lt;/th&gt;
&lt;th&gt;ADI-15782&lt;/th&gt;
&lt;th&gt;...&lt;/th&gt;
&lt;th&gt;ADI-16003&lt;/th&gt;
&lt;th&gt;ADI-16017&lt;/th&gt;
&lt;th&gt;ADI-16025&lt;/th&gt;
&lt;th&gt;ADI-16031&lt;/th&gt;
&lt;th&gt;ADI-16032&lt;/th&gt;
&lt;th&gt;ADI-16047&lt;/th&gt;
&lt;th&gt;ADI-16050&lt;/th&gt;
&lt;th&gt;FVM09&lt;/th&gt;
&lt;th&gt;KZ52&lt;/th&gt;
&lt;th&gt;mab100&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;First_ab&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ADI-15734&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADI-15741&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADI-15742&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADI-15743&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADI-15751&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;5 rows × 54 columns&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb2-1"&gt;tidy_df.describe()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Binding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;count&lt;/td&gt;
&lt;td&gt;2563.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mean&lt;/td&gt;
&lt;td&gt;1.304331&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;std&lt;/td&gt;
&lt;td&gt;0.870324&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;min&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;td&gt;1.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;50%&lt;/td&gt;
&lt;td&gt;1.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;75%&lt;/td&gt;
&lt;td&gt;2.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;max&lt;/td&gt;
&lt;td&gt;6.000000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The output above gives a feel for the data: there are 2,563 values in total, and judging from the minimum and maximum, these are post-processed scores rather than raw experimental measurements.&lt;/p&gt;
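Since the scores are discrete (0 to 6), one way to work with them is to map them onto blocking categories. The mini frame and cutoffs below are purely illustrative, not taken from the paper:

```python
import pandas as pd

# Hypothetical miniature version of tidy_df, to show how the discrete
# scores can be mapped to blocking categories (cutoffs are illustrative).
tidy = pd.DataFrame({
    "First_ab": ["A", "A", "B"],
    "Second_ab": ["B", "C", "C"],
    "Binding": [0.0, 1.0, 3.0],
})

# 0 -> blocked, 1 -> ambiguous, 2+ -> not blocked
tidy["Call"] = pd.cut(
    tidy["Binding"],
    bins=[-0.5, 0.5, 1.5, 6.5],
    labels=["blocked", "ambiguous", "not blocked"],
)
print(tidy)
```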

&lt;h1&gt;
&lt;span&gt;2&lt;/span&gt; Visualization&lt;/h1&gt;

&lt;h2&gt;
&lt;span&gt;2.1&lt;/span&gt; Heatmap&lt;/h2&gt;

&lt;p&gt;A heatmap provides a quick overview of blocking, non-blocking, and ambiguous antibody pairs. It makes the data easy to inspect and offers the following benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Intuitive understanding: complex data is color-coded so it can be grasped at a glance.&lt;/li&gt;
&lt;li&gt;Pattern identification: patterns and trends are easy to spot in large amounts of data.&lt;/li&gt;
&lt;li&gt;Flexible analysis: adjusting the cutoff value lets you examine the data under different conditions.&lt;/li&gt;
&lt;li&gt;Efficient interpretation: large amounts of information are presented in compressed form, supporting quick decisions.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb3-1"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.colors &lt;span&gt;as&lt;/span&gt; mcolors&lt;/span&gt;
&lt;span id="cb3-2"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb3-3"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb3-4"&gt;&lt;span&gt;import&lt;/span&gt; seaborn &lt;span&gt;as&lt;/span&gt; sns&lt;/span&gt;
&lt;span id="cb3-5"&gt;&lt;/span&gt;
&lt;span id="cb3-6"&gt;&lt;span&gt;# Nord Aurora 색상&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-7"&gt;&lt;span&gt;# A3BE8C: Nord8 (녹색)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-8"&gt;&lt;span&gt;# EBCB8B: Nord9 (노란색)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-9"&gt;&lt;span&gt;# BF616A: Nord11 (빨간색)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-10"&gt;aurora_colors &lt;span&gt;=&lt;/span&gt; [&lt;span&gt;"#BF616A"&lt;/span&gt;, &lt;span&gt;"#EBCB8B"&lt;/span&gt;, &lt;span&gt;"#A3BE8C"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb3-11"&gt;&lt;/span&gt;
&lt;span id="cb3-12"&gt;&lt;span&gt;# 사용자 정의 색상 맵 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-13"&gt;cmap &lt;span&gt;=&lt;/span&gt; sns.color_palette(aurora_colors)&lt;/span&gt;
&lt;span id="cb3-14"&gt;&lt;/span&gt;
&lt;span id="cb3-15"&gt;&lt;span&gt;# 마스크 생성: NaN은 True, 나머지는 False&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-16"&gt;mask &lt;span&gt;=&lt;/span&gt; np.isnan(wide_data)&lt;/span&gt;
&lt;span id="cb3-17"&gt;&lt;/span&gt;
&lt;span id="cb3-18"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;10&lt;/span&gt;, &lt;span&gt;10&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb3-19"&gt;sns.heatmap(&lt;/span&gt;
&lt;span id="cb3-20"&gt; df,&lt;/span&gt;
&lt;span id="cb3-21"&gt; cmap&lt;span&gt;=&lt;/span&gt;cmap,&lt;/span&gt;
&lt;span id="cb3-22"&gt; annot&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb3-23"&gt; linewidths&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb3-24"&gt; mask&lt;span&gt;=&lt;/span&gt;mask,&lt;/span&gt;
&lt;span id="cb3-25"&gt; vmin&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;# 최소값을 0으로 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-26"&gt; vmax&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;, &lt;span&gt;# 최대값을 1로 설정 (필요에 따라 조정)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-27"&gt; center&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.5&lt;/span&gt;, &lt;span&gt;# 중간값을 0.5로 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-28"&gt; cbar_kws&lt;span&gt;=&lt;/span&gt;{&lt;span&gt;"label"&lt;/span&gt;: &lt;span&gt;"Value"&lt;/span&gt;},&lt;/span&gt;
&lt;span id="cb3-29"&gt; cbar&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb3-30"&gt; square&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb3-31"&gt;)&lt;/span&gt;
&lt;span id="cb3-32"&gt;&lt;/span&gt;
&lt;span id="cb3-33"&gt;plt.title(&lt;span&gt;""&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-34"&gt;plt.xlabel(&lt;span&gt;""&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-35"&gt;plt.ylabel(&lt;span&gt;""&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-36"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_Epitope_binning_files/figure-html/cell-4-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_Epitope_binning_files%2Ffigure-html%2Fcell-4-output-1.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the heatmap above we can readily infer that there are roughly four clusters.&lt;/p&gt;
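Before fixing the number of clusters, a quick hierarchical clustering sketch shows how such communities can be recovered from a blocking matrix. The matrix below is a toy example (six hypothetical antibodies, two obvious communities), not the real data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy blocking matrix: 0 = blocks together, 2 = independent epitopes.
mat = np.array([
    [0, 0, 0, 2, 2, 2],
    [0, 0, 0, 2, 2, 2],
    [0, 0, 0, 2, 2, 2],
    [2, 2, 2, 0, 0, 0],
    [2, 2, 2, 0, 0, 0],
    [2, 2, 2, 0, 0, 0],
], dtype=float)

# Average-linkage clustering on the rows; cut the tree into two groups.
labels = fcluster(linkage(mat, method="average"), t=2, criterion="maxclust")
print(labels)
```

Inspecting the dendrogram (or the jump in linkage heights) is a more principled way to pick the cluster count than eyeballing the heatmap.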

&lt;h2&gt;
&lt;span&gt;2.2&lt;/span&gt; K-means clustering and network visualization&lt;/h2&gt;

&lt;p&gt;K-means clustering is a useful way to analyze and visualize epitope binning data, and it gives a clearer result than the heatmap.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb4-1"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.colors &lt;span&gt;as&lt;/span&gt; mcolors&lt;/span&gt;
&lt;span id="cb4-2"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb4-3"&gt;&lt;span&gt;import&lt;/span&gt; networkx &lt;span&gt;as&lt;/span&gt; nx&lt;/span&gt;
&lt;span id="cb4-4"&gt;&lt;span&gt;from&lt;/span&gt; adjustText &lt;span&gt;import&lt;/span&gt; adjust_text&lt;/span&gt;
&lt;span id="cb4-5"&gt;&lt;span&gt;from&lt;/span&gt; sklearn.cluster &lt;span&gt;import&lt;/span&gt; KMeans&lt;/span&gt;
&lt;span id="cb4-6"&gt;&lt;span&gt;from&lt;/span&gt; sklearn.preprocessing &lt;span&gt;import&lt;/span&gt; StandardScaler&lt;/span&gt;
&lt;span id="cb4-7"&gt;&lt;/span&gt;
&lt;span id="cb4-8"&gt;&lt;span&gt;# 데이터 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-9"&gt;data &lt;span&gt;=&lt;/span&gt; tidy_df.reset_index()&lt;/span&gt;
&lt;span id="cb4-10"&gt;&lt;/span&gt;
&lt;span id="cb4-11"&gt;&lt;span&gt;# NaN 값을 포함한 행 제거 및 First_ab와 Second_ab가 동일한 행 제거&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-12"&gt;data_clean &lt;span&gt;=&lt;/span&gt; data.dropna().query(&lt;span&gt;"First_ab != Second_ab"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb4-13"&gt;&lt;/span&gt;
&lt;span id="cb4-14"&gt;&lt;span&gt;# 방향성 그래프 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-15"&gt;G &lt;span&gt;=&lt;/span&gt; nx.DiGraph()&lt;/span&gt;
&lt;span id="cb4-16"&gt;&lt;span&gt;for&lt;/span&gt; _, row &lt;span&gt;in&lt;/span&gt; data_clean.iterrows():&lt;/span&gt;
&lt;span id="cb4-17"&gt; G.add_edge(row[&lt;span&gt;"First_ab"&lt;/span&gt;], row[&lt;span&gt;"Second_ab"&lt;/span&gt;], weight&lt;span&gt;=&lt;/span&gt;row[&lt;span&gt;"Binding"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb4-18"&gt;&lt;/span&gt;
&lt;span id="cb4-19"&gt;&lt;span&gt;# 인접 행렬 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-20"&gt;adj_matrix &lt;span&gt;=&lt;/span&gt; nx.to_numpy_array(G)&lt;/span&gt;
&lt;span id="cb4-21"&gt;&lt;/span&gt;
&lt;span id="cb4-22"&gt;&lt;span&gt;# 특성 스케일링&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-23"&gt;scaler &lt;span&gt;=&lt;/span&gt; StandardScaler()&lt;/span&gt;
&lt;span id="cb4-24"&gt;adj_matrix_scaled &lt;span&gt;=&lt;/span&gt; scaler.fit_transform(adj_matrix)&lt;/span&gt;
&lt;span id="cb4-25"&gt;&lt;/span&gt;
&lt;span id="cb4-26"&gt;&lt;span&gt;# KMeans 클러스터링&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-27"&gt;n_clusters &lt;span&gt;=&lt;/span&gt; &lt;span&gt;4&lt;/span&gt; &lt;span&gt;# 클러스터 수 고정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-28"&gt;kmeans &lt;span&gt;=&lt;/span&gt; KMeans(n_clusters&lt;span&gt;=&lt;/span&gt;n_clusters, random_state&lt;span&gt;=&lt;/span&gt;&lt;span&gt;420&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb4-29"&gt;cluster_labels &lt;span&gt;=&lt;/span&gt; kmeans.fit_predict(adj_matrix_scaled)&lt;/span&gt;
&lt;span id="cb4-30"&gt;&lt;/span&gt;
&lt;span id="cb4-31"&gt;&lt;span&gt;# 클러스터 정보 저장&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-32"&gt;partition &lt;span&gt;=&lt;/span&gt; &lt;span&gt;dict&lt;/span&gt;(&lt;span&gt;zip&lt;/span&gt;(G.nodes(), cluster_labels))&lt;/span&gt;
&lt;span id="cb4-33"&gt;&lt;/span&gt;
&lt;span id="cb4-34"&gt;&lt;span&gt;# 클러스터 중심 계산&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-35"&gt;cluster_centers &lt;span&gt;=&lt;/span&gt; kmeans.cluster_centers_&lt;/span&gt;
&lt;span id="cb4-36"&gt;&lt;/span&gt;
&lt;span id="cb4-37"&gt;&lt;/span&gt;
&lt;span id="cb4-38"&gt;&lt;span&gt;# 노드 위치 조정 함수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-39"&gt;&lt;span&gt;def&lt;/span&gt; adjust_positions(pos, partition, cluster_centers):&lt;/span&gt;
&lt;span id="cb4-40"&gt; new_pos &lt;span&gt;=&lt;/span&gt; {}&lt;/span&gt;
&lt;span id="cb4-41"&gt; &lt;span&gt;for&lt;/span&gt; node, position &lt;span&gt;in&lt;/span&gt; pos.items():&lt;/span&gt;
&lt;span id="cb4-42"&gt; cluster &lt;span&gt;=&lt;/span&gt; partition[node]&lt;/span&gt;
&lt;span id="cb4-43"&gt; center &lt;span&gt;=&lt;/span&gt; cluster_centers[cluster][:&lt;span&gt;2&lt;/span&gt;] &lt;span&gt;# 2D 좌표만 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-44"&gt; &lt;span&gt;# 노드를 클러스터 중심 방향으로 이동&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-45"&gt; new_pos[node] &lt;span&gt;=&lt;/span&gt; position &lt;span&gt;*&lt;/span&gt; &lt;span&gt;0.3&lt;/span&gt; &lt;span&gt;+&lt;/span&gt; center &lt;span&gt;*&lt;/span&gt; &lt;span&gt;0.7&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-46"&gt; &lt;span&gt;return&lt;/span&gt; new_pos&lt;/span&gt;
&lt;span id="cb4-47"&gt;&lt;/span&gt;
&lt;span id="cb4-48"&gt;&lt;/span&gt;
&lt;span id="cb4-49"&gt;&lt;span&gt;# 그래프 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-50"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;8&lt;/span&gt;, &lt;span&gt;8&lt;/span&gt;)) &lt;span&gt;# 그림 크기를 더 크게 조정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-51"&gt;pos &lt;span&gt;=&lt;/span&gt; nx.spring_layout(G, k&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.5&lt;/span&gt;, iterations&lt;span&gt;=&lt;/span&gt;&lt;span&gt;50&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb4-52"&gt;pos &lt;span&gt;=&lt;/span&gt; adjust_positions(pos, partition, cluster_centers)&lt;/span&gt;
&lt;span id="cb4-53"&gt;&lt;/span&gt;
&lt;span id="cb4-54"&gt;&lt;span&gt;# 노드 색상 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-55"&gt;colors &lt;span&gt;=&lt;/span&gt; [partition[node] &lt;span&gt;for&lt;/span&gt; node &lt;span&gt;in&lt;/span&gt; G.nodes()]&lt;/span&gt;
&lt;span id="cb4-56"&gt;&lt;/span&gt;
&lt;span id="cb4-57"&gt;&lt;span&gt;# Nord Aurora 색상 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-58"&gt;aurora_colors &lt;span&gt;=&lt;/span&gt; [&lt;span&gt;"#A3BE8C"&lt;/span&gt;, &lt;span&gt;"#EBCB8B"&lt;/span&gt;, &lt;span&gt;"#D08770"&lt;/span&gt;, &lt;span&gt;"#BF616A"&lt;/span&gt;, &lt;span&gt;"#B48EAD"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb4-59"&gt;&lt;/span&gt;
&lt;span id="cb4-60"&gt;&lt;span&gt;# Aurora 색상으로 ColorMap 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-61"&gt;aurora_cmap &lt;span&gt;=&lt;/span&gt; mcolors.ListedColormap(aurora_colors)&lt;/span&gt;
&lt;span id="cb4-62"&gt;&lt;/span&gt;
&lt;span id="cb4-63"&gt;&lt;span&gt;# 노드 그리기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-64"&gt;nx.draw_networkx_nodes(G, pos, node_color&lt;span&gt;=&lt;/span&gt;colors, node_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;, cmap&lt;span&gt;=&lt;/span&gt;aurora_cmap)&lt;/span&gt;
&lt;span id="cb4-65"&gt;&lt;/span&gt;
&lt;span id="cb4-66"&gt;&lt;span&gt;# 엣지 그리기 (방향성과 가중치 반영)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-67"&gt;edge_weights &lt;span&gt;=&lt;/span&gt; [G[u][v][&lt;span&gt;"weight"&lt;/span&gt;] &lt;span&gt;for&lt;/span&gt; u, v &lt;span&gt;in&lt;/span&gt; G.edges()]&lt;/span&gt;
&lt;span id="cb4-68"&gt;max_weight &lt;span&gt;=&lt;/span&gt; &lt;span&gt;max&lt;/span&gt;(edge_weights)&lt;/span&gt;
&lt;span id="cb4-69"&gt;edge_widths &lt;span&gt;=&lt;/span&gt; [&lt;span&gt;1&lt;/span&gt; &lt;span&gt;+&lt;/span&gt; &lt;span&gt;3&lt;/span&gt; &lt;span&gt;*&lt;/span&gt; (w &lt;span&gt;/&lt;/span&gt; max_weight) &lt;span&gt;for&lt;/span&gt; w &lt;span&gt;in&lt;/span&gt; edge_weights]&lt;/span&gt;
&lt;span id="cb4-70"&gt;&lt;/span&gt;
&lt;span id="cb4-71"&gt;nx.draw_networkx_edges(&lt;/span&gt;
&lt;span id="cb4-72"&gt; G,&lt;/span&gt;
&lt;span id="cb4-73"&gt; pos,&lt;/span&gt;
&lt;span id="cb4-74"&gt; alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.3&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-75"&gt; edge_color&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"lightgray"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-76"&gt; &lt;span&gt;# width=edge_widths,&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-77"&gt; arrows&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-78"&gt; arrowsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;10&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-79"&gt;)&lt;/span&gt;
&lt;span id="cb4-80"&gt;&lt;/span&gt;
&lt;span id="cb4-81"&gt;&lt;span&gt;# 라벨 위치 조정을 위한 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-82"&gt;texts &lt;span&gt;=&lt;/span&gt; []&lt;/span&gt;
&lt;span id="cb4-83"&gt;&lt;span&gt;for&lt;/span&gt; node, (x, y) &lt;span&gt;in&lt;/span&gt; pos.items():&lt;/span&gt;
&lt;span id="cb4-84"&gt; texts.append(plt.text(x, y, node, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;8&lt;/span&gt;, ha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"center"&lt;/span&gt;, va&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"center"&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb4-85"&gt;&lt;/span&gt;
&lt;span id="cb4-86"&gt;&lt;span&gt;# 라벨 위치 자동 조정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-87"&gt;adjust_text(texts, arrowprops&lt;span&gt;=&lt;/span&gt;{&lt;/span&gt;
&lt;span id="cb4-88"&gt; &lt;span&gt;"arrowstyle"&lt;/span&gt;:&lt;span&gt;"-"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-89"&gt; &lt;span&gt;"color"&lt;/span&gt;:&lt;span&gt;"gray"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-90"&gt; &lt;span&gt;"lw"&lt;/span&gt;:&lt;span&gt;0.5&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-91"&gt; })&lt;/span&gt;
&lt;span id="cb4-92"&gt;&lt;/span&gt;
&lt;span id="cb4-93"&gt;plt.title(&lt;span&gt;"Directed Graph with Weighted Edges"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb4-94"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb4-95"&gt;plt.tight_layout()&lt;/span&gt;
&lt;span id="cb4-96"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_Epitope_binning_files/figure-html/cell-5-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_Epitope_binning_files%2Ffigure-html%2Fcell-5-output-1.png" width="790" height="790"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result above shows four antibody clusters, consistent with the heatmap.&lt;/p&gt;
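To turn the picture into an explicit bin list, the node-to-cluster mapping can be grouped by label. The sketch below uses a hypothetical `partition` dictionary in the same shape as the one produced by the K-means step (node name to cluster label):

```python
from collections import defaultdict

# Hypothetical cluster assignments (node -> cluster label).
partition = {"ADI-15734": 0, "ADI-15741": 0, "ADI-15751": 1, "KZ52": 2}

# Group the antibodies by cluster label to list each bin's members.
bins = defaultdict(list)
for node, label in partition.items():
    bins[label].append(node)

for label in sorted(bins):
    print(f"Bin {label}: {sorted(bins[label])}")
```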



&lt;h1&gt;
&lt;span&gt;3&lt;/span&gt; Closing remarks&lt;/h1&gt;

&lt;p&gt;Epitope binning characterizes the binding properties of antibodies and makes it possible to select candidates that target diverse epitopes. Visualization methods such as heatmaps and K-means clustering help assess the diversity of an antibody panel and pick out the most promising candidates.&lt;/p&gt;



</description>
      <category>python</category>
      <category>visualization</category>
      <category>epitopebinning</category>
    </item>
    <item>
      <title>AKTA chromatogram visualization</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Fri, 14 Feb 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ehottl/akta-chromatogram-visualization-485a</link>
      <guid>https://dev.to/ehottl/akta-chromatogram-visualization-485a</guid>
      <description>&lt;p&gt;The AKTA system is a chromatography platform widely used in protein purification. Formerly sold by GE Healthcare and now by Cytiva, it comes with its own software, UNICORN. Here we'll look at how to generate chromatography plots from spreadsheet data exported from UNICORN, using Python for every step from data processing to visualization.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
from typing import List, Tuple
import pandas as pd
import matplotlib.pyplot as plt

def read_and_preprocess_data(file_path: str) -&amp;gt; pd.DataFrame:
    """Read and preprocess the Excel file."""
    # Read the Excel file (skipping the top two rows)
    df = pd.read_excel(file_path, index_col=False, skiprows=[0, 1])

    # Rename the columns
    new_columns = ['ml', 'mAU', 'ml_1', 'mS_cm', 'ml_2', 'percent', 'ml_3', '%B',
                   'ml_4', 'pH', 'ml_5', 'MPa', 'ml_6', 'ml_min', 'ml_7',
                   'temperature_C', 'ml_8', 'Frac', 'ml_9', 'Injections',
                   'ml_10', 'Set_Marks']
    df.columns = new_columns

    # Fix the %B values
    elution_start = df[df['Set_Marks'] == "Block Isocratic_Elution"]["ml_10"].values[0]
    df['%B'] = df['ml'].apply(lambda x: 100 if x &amp;gt;= elution_start else 0)

    return df

def setup_plot() -&amp;gt; Tuple[plt.Figure, plt.Axes]:
    """Initial plot setup."""
    fig, ax_main = plt.subplots(figsize=(10, 6))
    plt.subplots_adjust(top=0.85)
    return fig, ax_main

def plot_mau(ax: plt.Axes, df: pd.DataFrame) -&amp;gt; None:
    """Plot the mAU trace."""
    ax.plot(df['ml'], df['mAU'], color='blue', label='mAU')
    ax.fill_between(df['ml'], df['mAU'], color='lightblue', alpha=0.3)
    ax.set_xlabel('ml')
    ax.set_ylabel('mAU', color='blue')
    ax.set_ylim(0, 2500)
    ax.tick_params(axis='y', labelcolor='blue')

def plot_b_percentage(ax: plt.Axes, df: pd.DataFrame) -&amp;gt; plt.Axes:
    """Plot the %B trace."""
    ax_b = ax.twinx()
    ax_b.plot(df['ml'], df['%B'], color='red', label='%B')
    ax_b.set_ylabel('%B', color='red')
    ax_b.tick_params(axis='y', labelcolor='red')
    return ax_b

def plot_ph(ax: plt.Axes, df: pd.DataFrame) -&amp;gt; plt.Axes:
    """Plot the pH trace."""
    ax_ph = ax.twinx()
    ax_ph.plot(df['ml_4'], df['pH'], color='green', label='pH')
    ax_ph.set_ylabel('pH', color='green')
    ax_ph.set_ylim(0, 12)
    ax_ph.tick_params(axis='y', labelcolor='green')
    ax_ph.spines['right'].set_position(('outward', 60))
    return ax_ph

def add_fraction_lines(ax: plt.Axes, df: pd.DataFrame) -&amp;gt; None:
    """Mark the fractions."""
    for _, row in df.iterrows():
        if pd.notna(row['Frac']):
            ax.axvline(x=row['ml_8'], color='gray', linestyle='--', alpha=0.5)
            ax.text(row['ml_8'], ax.get_ylim()[1], row['Frac'], rotation=90, va='top', ha='right')

def add_combined_legend(axes: List[plt.Axes]) -&amp;gt; None:
    """Combine the legends from all axes."""
    lines, labels = [], []
    for ax in axes:
        ax_lines, ax_labels = ax.get_legend_handles_labels()
        lines.extend(ax_lines)
        labels.extend(ax_labels)
    axes[0].legend(lines, labels, loc='upper left')

def process_and_plot_file(file_path: str) -&amp;gt; None:
    """Process and plot a single file."""
    print(f'Processing file: {os.path.basename(file_path)}')

    df = read_and_preprocess_data(file_path)
    fig, ax_main = setup_plot()

    plot_mau(ax_main, df)
    ax_b = plot_b_percentage(ax_main, df)
    ax_ph = plot_ph(ax_main, df)
    add_fraction_lines(ax_main, df)
    add_combined_legend([ax_main, ax_b, ax_ph])

    plt.title(f'{os.path.basename(file_path)} Plot', y=1.01)
    ax_main.grid(True, linestyle='--', alpha=0.7)
    plt.tight_layout()
    plt.show()

def process_folder(folder_path: str) -&amp;gt; None:
    """Process every .xls file in the folder."""
    for filename in os.listdir(folder_path):
        if filename.endswith('.xls'):
            file_path = os.path.join(folder_path, filename)
            process_and_plot_file(file_path)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
&lt;span&gt;1&lt;/span&gt; Exploring the data&lt;/h1&gt;

&lt;p&gt;Experiment data can be saved from the UNICORN software as an Excel file, which typically has the following structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Header information: experimental conditions, date, time, etc.&lt;/li&gt;
&lt;li&gt;Column headers: the name and unit of each data column&lt;/li&gt;
&lt;li&gt;Data rows: various measurements over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's print the data table and take a look.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb2-1"&gt;&lt;/span&gt;
&lt;span id="cb2-2"&gt;df &lt;span&gt;=&lt;/span&gt; pd.read_excel(&lt;/span&gt;
&lt;span id="cb2-3"&gt; &lt;span&gt;"../data/input/AKTA_run_1.xls"&lt;/span&gt;, index_col&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb2-4"&gt; skiprows&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb2-5"&gt;df.head()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;ml&lt;/th&gt;
&lt;th&gt;mAU&lt;/th&gt;
&lt;th&gt;ml.1&lt;/th&gt;
&lt;th&gt;mS/cm&lt;/th&gt;
&lt;th&gt;ml.2&lt;/th&gt;
&lt;th&gt;%&lt;/th&gt;
&lt;th&gt;ml.3&lt;/th&gt;
&lt;th&gt;%B&lt;/th&gt;
&lt;th&gt;ml.4&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;...&lt;/th&gt;
&lt;th&gt;ml.6&lt;/th&gt;
&lt;th&gt;ml/min&lt;/th&gt;
&lt;th&gt;ml.7&lt;/th&gt;
&lt;th&gt;°C&lt;/th&gt;
&lt;th&gt;ml.8&lt;/th&gt;
&lt;th&gt;(Fractions)&lt;/th&gt;
&lt;th&gt;ml.9&lt;/th&gt;
&lt;th&gt;(Injections)&lt;/th&gt;
&lt;th&gt;ml.10&lt;/th&gt;
&lt;th&gt;(Set Marks)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;-0.099&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;14.197&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;6.86&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;14.98&lt;/td&gt;
&lt;td&gt;0.000000&lt;/td&gt;
&lt;td&gt;26.3&lt;/td&gt;
&lt;td&gt;-0.3&lt;/td&gt;
&lt;td&gt;A1&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;3.0&lt;/td&gt;
&lt;td&gt;-12.47&lt;/td&gt;
&lt;td&gt;Method Run 11/21/2024, 2:05:53 PM Korea Standa...&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.095014&lt;/td&gt;
&lt;td&gt;-0.091&lt;/td&gt;
&lt;td&gt;0.095257&lt;/td&gt;
&lt;td&gt;14.198&lt;/td&gt;
&lt;td&gt;0.190336&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;0.190336&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.095257&lt;/td&gt;
&lt;td&gt;6.86&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;0.095257&lt;/td&gt;
&lt;td&gt;14.50&lt;/td&gt;
&lt;td&gt;0.381111&lt;/td&gt;
&lt;td&gt;26.3&lt;/td&gt;
&lt;td&gt;9.7&lt;/td&gt;
&lt;td&gt;A2&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;-12.47&lt;/td&gt;
&lt;td&gt;Batch ID: 14DB55F2-E6A8-4734-8806-041C9AA49E11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0.190028&lt;/td&gt;
&lt;td&gt;-0.081&lt;/td&gt;
&lt;td&gt;0.190514&lt;/td&gt;
&lt;td&gt;14.199&lt;/td&gt;
&lt;td&gt;0.380673&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;0.380673&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.190514&lt;/td&gt;
&lt;td&gt;6.87&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;0.190514&lt;/td&gt;
&lt;td&gt;13.54&lt;/td&gt;
&lt;td&gt;0.762222&lt;/td&gt;
&lt;td&gt;26.3&lt;/td&gt;
&lt;td&gt;19.7&lt;/td&gt;
&lt;td&gt;A3&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;-12.47&lt;/td&gt;
&lt;td&gt;Base CV, 0.40 {ml}&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;0.285042&lt;/td&gt;
&lt;td&gt;-0.074&lt;/td&gt;
&lt;td&gt;0.285772&lt;/td&gt;
&lt;td&gt;14.203&lt;/td&gt;
&lt;td&gt;0.571009&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;0.571009&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.285772&lt;/td&gt;
&lt;td&gt;6.87&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;0.285772&lt;/td&gt;
&lt;td&gt;8.78&lt;/td&gt;
&lt;td&gt;1.143333&lt;/td&gt;
&lt;td&gt;26.2&lt;/td&gt;
&lt;td&gt;29.7&lt;/td&gt;
&lt;td&gt;A4&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;-12.47&lt;/td&gt;
&lt;td&gt;Block Start_with_PumpWash_Purifier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0.380056&lt;/td&gt;
&lt;td&gt;-0.067&lt;/td&gt;
&lt;td&gt;0.381029&lt;/td&gt;
&lt;td&gt;14.204&lt;/td&gt;
&lt;td&gt;0.761346&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;0.761346&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.381029&lt;/td&gt;
&lt;td&gt;6.87&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;0.381029&lt;/td&gt;
&lt;td&gt;8.00&lt;/td&gt;
&lt;td&gt;1.524444&lt;/td&gt;
&lt;td&gt;26.2&lt;/td&gt;
&lt;td&gt;39.7&lt;/td&gt;
&lt;td&gt;A5&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;-12.47&lt;/td&gt;
&lt;td&gt;Base SameAsMain&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;5 rows × 22 columns&lt;/p&gt;

&lt;h2&gt;
&lt;span&gt;1.1&lt;/span&gt; Key data columns&lt;/h2&gt;

&lt;p&gt;The key data columns commonly found in the file are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ml (Volume): elution volume&lt;/li&gt;
&lt;li&gt;mAU (UV Absorbance): ultraviolet absorbance&lt;/li&gt;
&lt;li&gt;mS/cm (Conductivity): conductivity&lt;/li&gt;
&lt;li&gt;%B (Buffer B Concentration): percentage of buffer B&lt;/li&gt;
&lt;li&gt;pH: pH value&lt;/li&gt;
&lt;li&gt;MPa (Pressure): system pressure&lt;/li&gt;
&lt;li&gt;°C (Temperature): temperature&lt;/li&gt;
&lt;li&gt;Fractions: fraction numbers&lt;/li&gt;
&lt;/ul&gt;
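&lt;p&gt;Because the export stores each channel as a (volume, value) column pair, a single curve can be pulled out with a few lines of pandas. The sketch below is only an illustration: the helper name and the synthetic data are assumptions, not part of the original script.&lt;/p&gt;

```python
import pandas as pd

def extract_curve(df, vol_col, val_col):
    """Return one detector channel as a tidy (volume, value) table,
    dropping rows where either column is missing."""
    curve = df[[vol_col, val_col]].dropna()
    return curve.reset_index(drop=True)

# Synthetic fragment shaped like the export above
raw = pd.DataFrame({
    "ml": [0.000000, 0.095014, 0.190028],
    "mAU": [-0.099, -0.091, -0.081],
    "pH": [6.86, 6.86, 6.87],
})
uv = extract_curve(raw, "ml", "mAU")
print(uv.shape)  # (3, 2)
```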



&lt;h1&gt;
&lt;span&gt;2&lt;/span&gt; Visualization&lt;/h1&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb3-1"&gt;&lt;span&gt;# 여기에 폴더 경로를 입력하세요.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-2"&gt;folder_path &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"../data/input"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-3"&gt;process_folder(folder_path)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;Processing file: AKTA_run_2.xls&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_AKTA_plot_files/figure-html/cell-4-output-2.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_AKTA_plot_files%2Ffigure-html%2Fcell-4-output-2.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;pre&gt;&lt;code&gt;Processing file: AKTA_run_3.xls&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_AKTA_plot_files/figure-html/cell-4-output-4.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_AKTA_plot_files%2Ffigure-html%2Fcell-4-output-4.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;pre&gt;&lt;code&gt;Processing file: AKTA_run_1.xls&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_AKTA_plot_files/figure-html/cell-4-output-6.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_AKTA_plot_files%2Ffigure-html%2Fcell-4-output-6.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;pre&gt;&lt;code&gt;Processing file: AKTA_run_4.xls&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="python_AKTA_plot_files/figure-html/cell-4-output-8.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Fpython_AKTA_plot_files%2Ffigure-html%2Fcell-4-output-8.png" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;br&gt;


&lt;p&gt;The generated plots provide the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;mAU trace: tracks changes in protein concentration&lt;/li&gt;
&lt;li&gt;%B trace: shows changes in buffer composition&lt;/li&gt;
&lt;li&gt;pH trace: monitors elution conditions&lt;/li&gt;
&lt;li&gt;Fraction marks: locate each collected fraction&lt;/li&gt;
&lt;/ul&gt;



&lt;h1&gt;
&lt;span&gt;3&lt;/span&gt; Conclusion&lt;/h1&gt;

&lt;p&gt;With this approach, AKTA data can be visualized effectively, which is a great help in optimizing the protein purification process and interpreting the results.&lt;/p&gt;



</description>
      <category>python</category>
      <category>visualization</category>
    </item>
    <item>
      <title>실습으로 배우는 대규모 언어 모델</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Sun, 26 Jan 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ehottl/silseubeuro-baeuneun-daegyumo-eoneo-model-18f2</link>
      <guid>https://dev.to/ehottl/silseubeuro-baeuneun-daegyumo-eoneo-model-18f2</guid>
      <description>&lt;p&gt;I recently read the book Hands-On Large Language Models (Jay Alammar and Maarten Grootendorst, 2024) and would like to summarize its contents here. The book explains the theory behind rapidly evolving large language models (LLMs) in accessible terms while letting readers experience it directly through hands-on exercises. Since it covers LLMs end to end, it should be an excellent guide for anyone interested in AI and natural language processing. In this post I share the most useful code from the book along with short explanations.&lt;/p&gt;

&lt;h1&gt;
&lt;span&gt;1&lt;/span&gt; Understanding large language models&lt;/h1&gt;

&lt;h2&gt;
&lt;span&gt;1.1&lt;/span&gt; Tokens and Embeddings&lt;/h2&gt;

&lt;p&gt;How does a tokenizer split text? Three factors matter most:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Vocabulary size&lt;/li&gt;
&lt;li&gt;Handling of out-of-vocabulary words&lt;/li&gt;
&lt;li&gt;Characteristics of the language&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Categories of tokenizers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Word tokens: split into words on whitespace or punctuation&lt;/li&gt;
&lt;li&gt;Subword tokens: keep frequent words whole and break rare words into smaller units (e.g., WordPiece, BPE)&lt;/li&gt;
&lt;li&gt;Character tokens: split into individual characters&lt;/li&gt;
&lt;li&gt;Byte tokens: split into bytes; can handle any language and special characters&lt;/li&gt;
&lt;/ol&gt;
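&lt;p&gt;The difference between these granularities is easy to see without any tokenizer library. The toy example below (plain Python, not from the book) also shows why byte tokens matter for a language like Korean, where one character spans several UTF-8 bytes:&lt;/p&gt;

```python
text = "안녕 world"

# Word tokens: split on whitespace
word_tokens = text.split()

# Character tokens: one token per character
char_tokens = list(text)

# Byte tokens: one token per UTF-8 byte; covers every script
byte_tokens = list(text.encode("utf-8"))

print(len(word_tokens), len(char_tokens), len(byte_tokens))  # 2 8 12
```

&lt;p&gt;Byte-level vocabularies stay tiny and never produce out-of-vocabulary tokens, at the cost of longer sequences.&lt;/p&gt;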

&lt;h2&gt;
&lt;span&gt;1.2&lt;/span&gt; Inside an LLM&lt;/h2&gt;

&lt;p&gt;An LLM consists of three components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tokenizer: converts the input text into tokens&lt;/li&gt;
&lt;li&gt;Transformer: the core architecture that processes the tokens and models their context&lt;/li&gt;
&lt;li&gt;LM head: the layer that takes the Transformer output and predicts the next token&lt;/li&gt;
&lt;/ul&gt;
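&lt;p&gt;How these three components cooperate can be sketched with toy stand-ins. Everything below is hypothetical (a dictionary vocabulary and a hard-coded next-token table replace real weights); it only illustrates the tokenize, transform, predict, append loop that autoregressive generation repeats:&lt;/p&gt;

```python
# Toy stand-ins for the three components; none of this is a real model.
vocab = {"BOS": 0, "hello": 1, "world": 2, "EOS": 3}
inv_vocab = {i: t for t, i in vocab.items()}

def tokenize(text):
    """Tokenizer: turn text into token ids."""
    return [vocab[t] for t in text.split()]

def transformer(ids):
    """Transformer: summarize the context (here, just the last token)."""
    return ids[-1]

def lm_head(state):
    """LM head: map the context state to the next token id."""
    table = {0: 1, 1: 2, 2: 3}  # BOS, then hello, then world, then EOS
    return table[state]

def generate(prompt, max_new_tokens=5):
    ids = tokenize(prompt)
    for _ in range(max_new_tokens):
        next_id = lm_head(transformer(ids))
        ids.append(next_id)
        if next_id == vocab["EOS"]:
            break
    return " ".join(inv_vocab[i] for i in ids)

print(generate("BOS"))  # BOS hello world EOS
```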

&lt;h3&gt;
&lt;span&gt;1.2.1&lt;/span&gt; Recent advances in Transformer blocks&lt;/h3&gt;

&lt;h4&gt;
&lt;span&gt;1.2.1.1&lt;/span&gt; RoPE&lt;/h4&gt;

&lt;p&gt;RoPE has the following characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relative positional encoding: RoPE directly models the relative positions of tokens.&lt;/li&gt;
&lt;li&gt;Rotation matrices: positional information is encoded efficiently through rotation matrices.&lt;/li&gt;
&lt;li&gt;Length extrapolation: it keeps working on sequences longer than those seen during training.&lt;/li&gt;
&lt;li&gt;Computational efficiency: cheaper than conventional positional embedding schemes.&lt;/li&gt;
&lt;li&gt;Better performance: especially on long-text processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RoPE is widely used in recent large language models such as PaLM and LLaMA, and it is especially effective for handling long contexts.&lt;/p&gt;
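&lt;p&gt;The relative-position property can be checked numerically. The sketch below is a minimal 2-D version of RoPE (a single rotation pair with one assumed frequency, not a full implementation): the score between a rotated query and key depends only on the position offset, not on the absolute positions.&lt;/p&gt;

```python
import numpy as np

def rope_rotate(x, pos, theta=0.1):
    """Rotate a 2-D query/key pair by an angle proportional to its position."""
    angle = pos * theta
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return rot @ x

q = np.array([1.0, 0.0])
k = np.array([0.0, 1.0])

# The attention score between positions m and n depends only on m - n:
score_a = rope_rotate(q, 3) @ rope_rotate(k, 1)  # offset 2
score_b = rope_rotate(q, 7) @ rope_rotate(k, 5)  # offset 2
print(np.isclose(score_a, score_b))  # True
```

&lt;p&gt;Real RoPE applies one such rotation, each with its own frequency, to every consecutive pair of embedding dimensions.&lt;/p&gt;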


&lt;br&gt;
&lt;br&gt;


&lt;h1&gt;
&lt;span&gt;2&lt;/span&gt; Using pre-trained LLMs&lt;/h1&gt;


&lt;h2&gt;
&lt;span&gt;2.1&lt;/span&gt; Text classification&lt;/h2&gt;
&lt;p&gt;Classification is one of the most common tasks in natural language processing. The goal is to train a model that assigns a label or class to input text. Text classification is used around the world for many purposes, including sentiment analysis, intent detection, entity extraction, and language detection.&lt;/p&gt;
&lt;p&gt;Representation models and generative language models have had an enormous impact on classification. They have greatly improved the accuracy and efficiency of text classification and made more complex, nuanced classification tasks possible. In particular, the arrival of pre-trained large language models (LLMs) has dramatically advanced text classification performance.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;2.1.1&lt;/span&gt; Sentiment analysis with a representation model&lt;/h3&gt;
&lt;p&gt;We load text data and run sentiment analysis with the “cardiffnlp/twitter-roberta-base-sentiment-latest” model. Built on the RoBERTa architecture and fine-tuned on Twitter data, it is optimized for sentiment analysis of social media text.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb1-1"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb1-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; pipeline&lt;/span&gt;
&lt;span id="cb1-3"&gt;&lt;span&gt;from&lt;/span&gt; sklearn.metrics &lt;span&gt;import&lt;/span&gt; classification_report&lt;/span&gt;
&lt;span id="cb1-4"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb1-5"&gt;&lt;span&gt;from&lt;/span&gt; tqdm &lt;span&gt;import&lt;/span&gt; tqdm&lt;/span&gt;
&lt;span id="cb1-6"&gt;&lt;span&gt;from&lt;/span&gt; transformers.pipelines.pt_utils &lt;span&gt;import&lt;/span&gt; KeyDataset&lt;/span&gt;
&lt;span id="cb1-7"&gt;&lt;/span&gt;
&lt;span id="cb1-8"&gt;&lt;span&gt;# 데이터셋 불러오기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-9"&gt;data &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"rotten_tomatoes"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-10"&gt;&lt;/span&gt;
&lt;span id="cb1-11"&gt;&lt;span&gt;# Hugging Face 모델&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-12"&gt;model_name: &lt;span&gt;str&lt;/span&gt; &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"cardiffnlp/twitter-roberta-base-sentiment-latest"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-13"&gt;&lt;/span&gt;
&lt;span id="cb1-14"&gt;&lt;span&gt;# 모델을 파이프라인에 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-15"&gt;pipe &lt;span&gt;=&lt;/span&gt; pipeline(model&lt;span&gt;=&lt;/span&gt;model_name, tokenizer&lt;span&gt;=&lt;/span&gt;model_name, top_k&lt;span&gt;=&lt;/span&gt;&lt;span&gt;None&lt;/span&gt;, device&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cuda:0"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-16"&gt;&lt;/span&gt;
&lt;span id="cb1-17"&gt;&lt;span&gt;# 추론 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-18"&gt;y_pred: &lt;span&gt;list&lt;/span&gt;[&lt;span&gt;int&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; []&lt;/span&gt;
&lt;span id="cb1-19"&gt;&lt;span&gt;for&lt;/span&gt; output &lt;span&gt;in&lt;/span&gt; tqdm(pipe(KeyDataset(data[&lt;span&gt;"test"&lt;/span&gt;], &lt;span&gt;"text"&lt;/span&gt;)), total&lt;span&gt;=&lt;/span&gt;&lt;span&gt;len&lt;/span&gt;(data[&lt;span&gt;"test"&lt;/span&gt;])):&lt;/span&gt;
&lt;span id="cb1-20"&gt; negative_score &lt;span&gt;=&lt;/span&gt; output[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"score"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb1-21"&gt; positive_score &lt;span&gt;=&lt;/span&gt; output[&lt;span&gt;2&lt;/span&gt;][&lt;span&gt;"score"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb1-22"&gt; assignment &lt;span&gt;=&lt;/span&gt; np.argmax([negative_score, positive_score])&lt;/span&gt;
&lt;span id="cb1-23"&gt; y_pred.append(assignment)&lt;/span&gt;
&lt;span id="cb1-24"&gt;&lt;/span&gt;
&lt;span id="cb1-25"&gt;&lt;/span&gt;
&lt;span id="cb1-26"&gt;&lt;span&gt;def&lt;/span&gt; evaluate_performance(y_true: &lt;span&gt;list&lt;/span&gt;[&lt;span&gt;int&lt;/span&gt;], y_pred: &lt;span&gt;list&lt;/span&gt;[&lt;span&gt;int&lt;/span&gt;]) &lt;span&gt;-&amp;gt;&lt;/span&gt; &lt;span&gt;None&lt;/span&gt;:&lt;/span&gt;
&lt;span id="cb1-27"&gt; &lt;span&gt;"""분류 보고서 생성 및 출력"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-28"&gt; performance: &lt;span&gt;str&lt;/span&gt; &lt;span&gt;=&lt;/span&gt; classification_report(&lt;/span&gt;
&lt;span id="cb1-29"&gt; y_true, y_pred, target_names&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;"부정적"&lt;/span&gt;, &lt;span&gt;"긍정적"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb1-30"&gt; )&lt;/span&gt;
&lt;span id="cb1-31"&gt; &lt;span&gt;print&lt;/span&gt;(performance)&lt;/span&gt;
&lt;span id="cb1-32"&gt;&lt;/span&gt;
&lt;span id="cb1-33"&gt;&lt;/span&gt;
&lt;span id="cb1-34"&gt;&lt;span&gt;# 성능 평가 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-35"&gt;evaluate_performance(data[&lt;span&gt;"test"&lt;/span&gt;][&lt;span&gt;"label"&lt;/span&gt;], y_pred)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;100%|█████████████████████████| 1066/1066 [00:02&amp;lt;00:00, 459.29it/s]&lt;/code&gt;&lt;/pre&gt;


&lt;pre&gt;&lt;code&gt; precision recall f1-score support

         부정적 0.50 1.00 0.67 533
         긍정적 0.00 0.00 0.00 533

    accuracy 0.50 1066
   macro avg 0.25 0.50 0.33 1066
weighted avg 0.25 0.50 0.33 1066
&lt;/code&gt;&lt;/pre&gt;







&lt;h3&gt;
&lt;span&gt;2.1.2&lt;/span&gt; Sentiment analysis with a generative model&lt;/h3&gt;
&lt;p&gt;Using a generative model for sentiment analysis is a different paradigm from classification-based approaches. It can deliver more accurate and nuanced results, but training and running the model is more computationally expensive.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb5-1"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb5-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; pipeline&lt;/span&gt;
&lt;span id="cb5-3"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb5-4"&gt;&lt;span&gt;from&lt;/span&gt; tqdm &lt;span&gt;import&lt;/span&gt; tqdm&lt;/span&gt;
&lt;span id="cb5-5"&gt;&lt;span&gt;from&lt;/span&gt; transformers.pipelines.pt_utils &lt;span&gt;import&lt;/span&gt; KeyDataset&lt;/span&gt;
&lt;span id="cb5-6"&gt;&lt;/span&gt;
&lt;span id="cb5-7"&gt;&lt;span&gt;# 영화 리뷰 데이터셋 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-8"&gt;data &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"rotten_tomatoes"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb5-9"&gt;&lt;/span&gt;
&lt;span id="cb5-10"&gt;&lt;span&gt;# Hugging Face 모델&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-11"&gt;model_name: &lt;span&gt;str&lt;/span&gt; &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"google/flan-t5-small"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-12"&gt;&lt;/span&gt;
&lt;span id="cb5-13"&gt;&lt;span&gt;# 모델을 파이프라인에 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-14"&gt;pipe &lt;span&gt;=&lt;/span&gt; pipeline(&lt;span&gt;"text2text-generation"&lt;/span&gt;, model&lt;span&gt;=&lt;/span&gt;model_name, device&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cuda:0"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb5-15"&gt;&lt;/span&gt;
&lt;span id="cb5-16"&gt;&lt;span&gt;# 데이터 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-17"&gt;prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"Is the following sentence positive or negative? "&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-18"&gt;data &lt;span&gt;=&lt;/span&gt; data.&lt;span&gt;map&lt;/span&gt;(&lt;span&gt;lambda&lt;/span&gt; example: {&lt;span&gt;"t5"&lt;/span&gt;: prompt &lt;span&gt;+&lt;/span&gt; example[&lt;span&gt;"text"&lt;/span&gt;]})&lt;/span&gt;
&lt;span id="cb5-19"&gt;&lt;/span&gt;
&lt;span id="cb5-20"&gt;&lt;span&gt;# 추론 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-21"&gt;y_pred &lt;span&gt;=&lt;/span&gt; []&lt;/span&gt;
&lt;span id="cb5-22"&gt;&lt;span&gt;for&lt;/span&gt; output &lt;span&gt;in&lt;/span&gt; tqdm(pipe(KeyDataset(data[&lt;span&gt;"test"&lt;/span&gt;], &lt;span&gt;"t5"&lt;/span&gt;)), total&lt;span&gt;=&lt;/span&gt;&lt;span&gt;len&lt;/span&gt;(data[&lt;span&gt;"test"&lt;/span&gt;])):&lt;/span&gt;
&lt;span id="cb5-23"&gt; text &lt;span&gt;=&lt;/span&gt; output[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb5-24"&gt; y_pred.append(&lt;span&gt;0&lt;/span&gt; &lt;span&gt;if&lt;/span&gt; text &lt;span&gt;==&lt;/span&gt; &lt;span&gt;"negative"&lt;/span&gt; &lt;span&gt;else&lt;/span&gt; &lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb5-25"&gt;&lt;/span&gt;
&lt;span id="cb5-26"&gt;&lt;/span&gt;
&lt;span id="cb5-27"&gt;&lt;span&gt;def&lt;/span&gt; evaluate_performance(y_true: &lt;span&gt;list&lt;/span&gt;[&lt;span&gt;int&lt;/span&gt;], y_pred: &lt;span&gt;list&lt;/span&gt;[&lt;span&gt;int&lt;/span&gt;]) &lt;span&gt;-&amp;gt;&lt;/span&gt; &lt;span&gt;None&lt;/span&gt;:&lt;/span&gt;
&lt;span id="cb5-28"&gt; &lt;span&gt;"""분류 보고서 생성 및 출력"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-29"&gt; performance: &lt;span&gt;str&lt;/span&gt; &lt;span&gt;=&lt;/span&gt; classification_report(&lt;/span&gt;
&lt;span id="cb5-30"&gt; y_true, y_pred, target_names&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;"부정적"&lt;/span&gt;, &lt;span&gt;"긍정적"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb5-31"&gt; )&lt;/span&gt;
&lt;span id="cb5-32"&gt; &lt;span&gt;print&lt;/span&gt;(performance)&lt;/span&gt;
&lt;span id="cb5-33"&gt;&lt;/span&gt;
&lt;span id="cb5-34"&gt;&lt;/span&gt;
&lt;span id="cb5-35"&gt;&lt;span&gt;# 성능 평가 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-36"&gt;evaluate_performance(data[&lt;span&gt;"test"&lt;/span&gt;][&lt;span&gt;"label"&lt;/span&gt;], y_pred)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;100%|█████████████████████████| 1066/1066 [00:08&amp;lt;00:00, 121.39it/s]&lt;/code&gt;&lt;/pre&gt;


&lt;pre&gt;&lt;code&gt; precision recall f1-score support

         부정적 0.83 0.85 0.84 533
         긍정적 0.85 0.83 0.84 533

    accuracy 0.84 1066
   macro avg 0.84 0.84 0.84 1066
weighted avg 0.84 0.84 0.84 1066
&lt;/code&gt;&lt;/pre&gt;









&lt;h2&gt;
&lt;span&gt;2.2&lt;/span&gt; Text clustering and topic modeling&lt;/h2&gt;
&lt;p&gt;Text clustering and topic modeling are two major approaches to analyzing document collections. Text clustering aims to group similar documents, dividing the collection into clusters; typically each document belongs to exactly one cluster. Topic modeling, by contrast, aims to discover the abstract “topics” latent in the collection.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;2.2.1&lt;/span&gt; BERTopic: a modular topic modeling framework&lt;/h3&gt;
&lt;p&gt;BERTopic is a powerful topic modeling framework built on modern natural language processing techniques. It can produce more refined results than traditional methods such as LDA and performs especially well on short or domain-specific text. It is applicable in many areas, such as academic research, social media analysis, and customer feedback analysis, and is useful for extracting meaningful insights from large document collections.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb9-1"&gt;&lt;span&gt;import&lt;/span&gt; umap&lt;/span&gt;
&lt;span id="cb9-2"&gt;&lt;span&gt;import&lt;/span&gt; pandas &lt;span&gt;as&lt;/span&gt; pd&lt;/span&gt;
&lt;span id="cb9-3"&gt;&lt;span&gt;from&lt;/span&gt; hdbscan &lt;span&gt;import&lt;/span&gt; HDBSCAN&lt;/span&gt;
&lt;span id="cb9-4"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb9-5"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers &lt;span&gt;import&lt;/span&gt; SentenceTransformer&lt;/span&gt;
&lt;span id="cb9-6"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb9-7"&gt;&lt;/span&gt;
&lt;span id="cb9-8"&gt;&lt;span&gt;# huggingface에서 데이터 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-9"&gt;dataset &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"effectiveML/ArXiv-10"&lt;/span&gt;)[&lt;span&gt;"train"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb9-10"&gt;&lt;/span&gt;
&lt;span id="cb9-11"&gt;&lt;span&gt;# 메타데이터 추출&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-12"&gt;abstracts &lt;span&gt;=&lt;/span&gt; dataset[&lt;span&gt;"abstract"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb9-13"&gt;titles &lt;span&gt;=&lt;/span&gt; dataset[&lt;span&gt;"title"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb9-14"&gt;&lt;/span&gt;
&lt;span id="cb9-15"&gt;&lt;span&gt;# 1단계&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-16"&gt;&lt;span&gt;# 각 초록에 대한 임베딩 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-17"&gt;embedding_model &lt;span&gt;=&lt;/span&gt; SentenceTransformer(&lt;span&gt;"thenlper/gte-small"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb9-18"&gt;embeddings &lt;span&gt;=&lt;/span&gt; embedding_model.encode(abstracts, show_progress_bar&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb9-19"&gt;&lt;/span&gt;
&lt;span id="cb9-20"&gt;&lt;span&gt;# 2단계&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-21"&gt;&lt;span&gt;# 384차원의 입력 임베딩을 50차원으로 축소&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-22"&gt;umap_model &lt;span&gt;=&lt;/span&gt; umap.UMAP(n_components&lt;span&gt;=&lt;/span&gt;&lt;span&gt;50&lt;/span&gt;, min_dist&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.0&lt;/span&gt;, metric&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;, random_state&lt;span&gt;=&lt;/span&gt;&lt;span&gt;42&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb9-23"&gt;reduced_embeddings &lt;span&gt;=&lt;/span&gt; umap_model.fit_transform(embeddings)&lt;/span&gt;
&lt;span id="cb9-24"&gt;&lt;/span&gt;
&lt;span id="cb9-25"&gt;&lt;span&gt;# 3단계&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-26"&gt;&lt;span&gt;# 모델을 학습하고 클러스터 추출&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-27"&gt;hdbscan_model &lt;span&gt;=&lt;/span&gt; HDBSCAN(&lt;/span&gt;
&lt;span id="cb9-28"&gt; min_cluster_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;50&lt;/span&gt;, metric&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"euclidean"&lt;/span&gt;, cluster_selection_method&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"eom"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-29"&gt;).fit(reduced_embeddings)&lt;/span&gt;
&lt;span id="cb9-30"&gt;clusters &lt;span&gt;=&lt;/span&gt; hdbscan_model.labels_&lt;/span&gt;
&lt;span id="cb9-31"&gt;&lt;/span&gt;
&lt;span id="cb9-32"&gt;&lt;span&gt;# 시각화를 위한 준비: 384차원 임베딩을 2차원으로 축소&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-33"&gt;reduced_embeddings &lt;span&gt;=&lt;/span&gt; umap.UMAP(&lt;/span&gt;
&lt;span id="cb9-34"&gt; n_components&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2&lt;/span&gt;, min_dist&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.0&lt;/span&gt;, metric&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;, random_state&lt;span&gt;=&lt;/span&gt;&lt;span&gt;42&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-35"&gt;).fit_transform(embeddings)&lt;/span&gt;
&lt;span id="cb9-36"&gt;&lt;/span&gt;
&lt;span id="cb9-37"&gt;&lt;span&gt;# 데이터프레임 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-38"&gt;df &lt;span&gt;=&lt;/span&gt; pd.DataFrame(reduced_embeddings, columns&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;"x"&lt;/span&gt;, &lt;span&gt;"y"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb9-39"&gt;df[&lt;span&gt;"title"&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; titles&lt;/span&gt;
&lt;span id="cb9-40"&gt;df[&lt;span&gt;"cluster"&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; [&lt;span&gt;str&lt;/span&gt;(c) &lt;span&gt;for&lt;/span&gt; c &lt;span&gt;in&lt;/span&gt; clusters]&lt;/span&gt;
&lt;span id="cb9-41"&gt;&lt;/span&gt;
&lt;span id="cb9-42"&gt;&lt;span&gt;# 이상치와 비이상치(클러스터) 선택&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-43"&gt;clusters_df &lt;span&gt;=&lt;/span&gt; df.loc[df.cluster &lt;span&gt;!=&lt;/span&gt; &lt;span&gt;"-1"&lt;/span&gt;, :]&lt;/span&gt;
&lt;span id="cb9-44"&gt;outliers_df &lt;span&gt;=&lt;/span&gt; df.loc[df.cluster &lt;span&gt;==&lt;/span&gt; &lt;span&gt;"-1"&lt;/span&gt;, :]&lt;/span&gt;
&lt;span id="cb9-45"&gt;&lt;/span&gt;
&lt;span id="cb9-46"&gt;&lt;/span&gt;
&lt;span id="cb9-47"&gt;&lt;span&gt;# 플랏 크기 지정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb9-48"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;6&lt;/span&gt;, &lt;span&gt;6&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb9-49"&gt;plt.scatter(&lt;/span&gt;
&lt;span id="cb9-50"&gt; outliers_df.x,&lt;/span&gt;
&lt;span id="cb9-51"&gt; outliers_df.y,&lt;/span&gt;
&lt;span id="cb9-52"&gt; alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-53"&gt; s&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-54"&gt; c&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"grey"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-55"&gt;)&lt;/span&gt;
&lt;span id="cb9-56"&gt;plt.scatter(&lt;/span&gt;
&lt;span id="cb9-57"&gt; clusters_df.x,&lt;/span&gt;
&lt;span id="cb9-58"&gt; clusters_df.y,&lt;/span&gt;
&lt;span id="cb9-59"&gt; c&lt;span&gt;=&lt;/span&gt;clusters_df.cluster.astype(&lt;span&gt;int&lt;/span&gt;),&lt;/span&gt;
&lt;span id="cb9-60"&gt; alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.15&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-61"&gt; s&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-62"&gt; cmap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"viridis_r"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb9-63"&gt;)&lt;/span&gt;
&lt;span id="cb9-64"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb9-65"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;








&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-7-output-2.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-7-output-2.png" width="515" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb10-1"&gt;&lt;span&gt;from&lt;/span&gt; bertopic &lt;span&gt;import&lt;/span&gt; BERTopic&lt;/span&gt;
&lt;span id="cb10-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; pipeline&lt;/span&gt;
&lt;span id="cb10-3"&gt;&lt;span&gt;from&lt;/span&gt; bertopic.representation &lt;span&gt;import&lt;/span&gt; TextGeneration&lt;/span&gt;
&lt;span id="cb10-4"&gt;&lt;span&gt;from&lt;/span&gt; copy &lt;span&gt;import&lt;/span&gt; deepcopy&lt;/span&gt;
&lt;span id="cb10-5"&gt;&lt;span&gt;import&lt;/span&gt; pandas &lt;span&gt;as&lt;/span&gt; pd&lt;/span&gt;
&lt;span id="cb10-6"&gt;&lt;/span&gt;
&lt;span id="cb10-7"&gt;&lt;span&gt;# BERTopic 모델 훈련&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-8"&gt;topic_model &lt;span&gt;=&lt;/span&gt; BERTopic(&lt;/span&gt;
&lt;span id="cb10-9"&gt; embedding_model&lt;span&gt;=&lt;/span&gt;embedding_model,&lt;/span&gt;
&lt;span id="cb10-10"&gt; umap_model&lt;span&gt;=&lt;/span&gt;umap_model,&lt;/span&gt;
&lt;span id="cb10-11"&gt; hdbscan_model&lt;span&gt;=&lt;/span&gt;hdbscan_model,&lt;/span&gt;
&lt;span id="cb10-12"&gt; verbose&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb10-13"&gt;).fit(abstracts, embeddings)&lt;/span&gt;
&lt;span id="cb10-14"&gt;&lt;/span&gt;
&lt;span id="cb10-15"&gt;&lt;span&gt;# 원본 표현 저장&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-16"&gt;original_topics &lt;span&gt;=&lt;/span&gt; deepcopy(topic_model.topic_representations_)&lt;/span&gt;
&lt;span id="cb10-17"&gt;&lt;/span&gt;
&lt;span id="cb10-18"&gt;&lt;span&gt;# Flan-T5를 사용한 토픽 표현 업데이트&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-19"&gt;generator &lt;span&gt;=&lt;/span&gt; pipeline(&lt;span&gt;"text2text-generation"&lt;/span&gt;, model&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"google/flan-t5-small"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb10-20"&gt;&lt;/span&gt;
&lt;span id="cb10-21"&gt;&lt;span&gt;# 프롬프트 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-22"&gt;prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"""I have a topic that contains the following documents:&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-23"&gt;&lt;span&gt;[DOCUMENTS]&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-24"&gt;&lt;span&gt;The topic is described by the following keywords: '[KEYWORDS]'.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-25"&gt;&lt;span&gt;Based on the documents and keywords, what is this topic about?"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-26"&gt;&lt;/span&gt;
&lt;span id="cb10-27"&gt;representation_model &lt;span&gt;=&lt;/span&gt; TextGeneration(&lt;/span&gt;
&lt;span id="cb10-28"&gt; generator, prompt&lt;span&gt;=&lt;/span&gt;prompt, doc_length&lt;span&gt;=&lt;/span&gt;&lt;span&gt;50&lt;/span&gt;, tokenizer&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"whitespace"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-29"&gt;)&lt;/span&gt;
&lt;span id="cb10-30"&gt;topic_model.update_topics(abstracts, representation_model&lt;span&gt;=&lt;/span&gt;representation_model)&lt;/span&gt;
&lt;span id="cb10-31"&gt;&lt;/span&gt;
&lt;span id="cb10-32"&gt;&lt;/span&gt;
&lt;span id="cb10-33"&gt;&lt;span&gt;# 토픽 차이 표시 함수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-34"&gt;&lt;span&gt;def&lt;/span&gt; topic_differences(model, original_topics, nr_topics&lt;span&gt;=&lt;/span&gt;&lt;span&gt;5&lt;/span&gt;):&lt;/span&gt;
&lt;span id="cb10-35"&gt; &lt;span&gt;"""두 모델 간의 토픽 표현 차이를 보여줍니다"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-36"&gt; df &lt;span&gt;=&lt;/span&gt; pd.DataFrame(columns&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;"Topic"&lt;/span&gt;, &lt;span&gt;"Original"&lt;/span&gt;, &lt;span&gt;"Updated"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb10-37"&gt; &lt;span&gt;for&lt;/span&gt; topic &lt;span&gt;in&lt;/span&gt; &lt;span&gt;range&lt;/span&gt;(nr_topics):&lt;/span&gt;
&lt;span id="cb10-38"&gt; &lt;span&gt;# 모델별로 토픽당 상위 5개 단어 추출&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-39"&gt; og_words &lt;span&gt;=&lt;/span&gt; &lt;span&gt;" | "&lt;/span&gt;.join(&lt;span&gt;list&lt;/span&gt;(&lt;span&gt;zip&lt;/span&gt;(&lt;span&gt;*&lt;/span&gt;original_topics[topic]))[&lt;span&gt;0&lt;/span&gt;][:&lt;span&gt;5&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb10-40"&gt; new_words &lt;span&gt;=&lt;/span&gt; &lt;span&gt;" "&lt;/span&gt;.join(&lt;span&gt;list&lt;/span&gt;(&lt;span&gt;zip&lt;/span&gt;(&lt;span&gt;*&lt;/span&gt;model.get_topic(topic)))[&lt;span&gt;0&lt;/span&gt;][:&lt;span&gt;5&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb10-41"&gt; df.loc[&lt;span&gt;len&lt;/span&gt;(df)] &lt;span&gt;=&lt;/span&gt; [topic, og_words, new_words]&lt;/span&gt;
&lt;span id="cb10-42"&gt; &lt;span&gt;return&lt;/span&gt; df&lt;/span&gt;
&lt;span id="cb10-43"&gt;&lt;/span&gt;
&lt;span id="cb10-44"&gt;&lt;/span&gt;
&lt;span id="cb10-45"&gt;&lt;span&gt;# 토픽 차이 출력&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb10-46"&gt;&lt;span&gt;print&lt;/span&gt;(topic_differences(topic_model, original_topics))&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;2025-01-21 12:04:12,339 - BERTopic - Dimensionality - Fitting the dimensionality reduction algorithm
2025-01-21 12:05:33,792 - BERTopic - Dimensionality - Completed ✓
2025-01-21 12:05:33,799 - BERTopic - Cluster - Start clustering the reduced embeddings
2025-01-21 12:05:37,721 - BERTopic - Cluster - Completed ✓
2025-01-21 12:05:37,728 - BERTopic - Representation - Extracting topics from clusters using representation models.
2025-01-21 12:05:41,069 - BERTopic - Representation - Completed ✓
100%|████████████████████████████| 205/205 [00:03&amp;lt;00:00, 54.17it/s]&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; Topic Original \
0 0 mathbb | prove | mathcal | we | if   
1 1 flow | fluid | the | of | and   
2 2 channel | wireless | communication | mimo | pr...   
3 3 quantum | entanglement | states | bell | measu...   
4 4 solar | plasma | magnetic | coronal | reconnec...   

                    Updated  
0 Maths      
1 dynamics      
2 Networking      
3 Quantum entanglement      
4 Solar-energy &lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb13-1"&gt;fig &lt;span&gt;=&lt;/span&gt; topic_model.visualize_document_datamap(&lt;/span&gt;
&lt;span id="cb13-2"&gt; titles,&lt;/span&gt;
&lt;span id="cb13-3"&gt; title&lt;span&gt;=&lt;/span&gt;&lt;span&gt;""&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb13-4"&gt; topics&lt;span&gt;=&lt;/span&gt;&lt;span&gt;list&lt;/span&gt;(&lt;span&gt;range&lt;/span&gt;(&lt;span&gt;20&lt;/span&gt;)),&lt;/span&gt;
&lt;span id="cb13-5"&gt; reduced_embeddings&lt;span&gt;=&lt;/span&gt;reduced_embeddings,&lt;/span&gt;
&lt;span id="cb13-6"&gt; width&lt;span&gt;=&lt;/span&gt;&lt;span&gt;600&lt;/span&gt;, &lt;span&gt;# 7인치에 해당하는 픽셀 수 (100 픽셀/인치 기준)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb13-7"&gt; height&lt;span&gt;=&lt;/span&gt;&lt;span&gt;600&lt;/span&gt;, &lt;span&gt;# 7인치에 해당하는 픽셀 수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb13-8"&gt; label_font_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;11&lt;/span&gt;, &lt;span&gt;# 텍스트 크기 축소&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb13-9"&gt; label_wrap_width&lt;span&gt;=&lt;/span&gt;&lt;span&gt;15&lt;/span&gt;, &lt;span&gt;# 레이블 줄바꿈 너비 축소&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb13-10"&gt; use_medoids&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb13-11"&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-9-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-9-output-1.png" width="711" height="611"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
&lt;span&gt;2.3&lt;/span&gt; Prompt engineering&lt;/h2&gt;

&lt;p&gt;Generative pre-trained transformer (GPT) models have a remarkable ability to generate text in response to a user’s prompt. Through prompt engineering, designing those prompts effectively, the quality of the generated text can be improved substantially.&lt;/p&gt;

&lt;p&gt;Here we take a closer look at these generative models and dive into prompt engineering. We also cover reasoning with generative models, verification, and ways to evaluate model outputs.&lt;/p&gt;

&lt;p&gt;Prompt engineering goes beyond simply asking questions: it is the craft of steering a model to respond the way you want. It plays a key role in optimizing model performance and in obtaining results tailored to a specific task.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb14-1"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb14-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb14-3"&gt; AutoModelForCausalLM,&lt;/span&gt;
&lt;span id="cb14-4"&gt; AutoTokenizer,&lt;/span&gt;
&lt;span id="cb14-5"&gt; pipeline,&lt;/span&gt;
&lt;span id="cb14-6"&gt; logging,&lt;/span&gt;
&lt;span id="cb14-7"&gt;)&lt;/span&gt;
&lt;span id="cb14-8"&gt;&lt;/span&gt;
&lt;span id="cb14-9"&gt;&lt;span&gt;# 사용할 모델 이름 지정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-10"&gt;model_name &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"microsoft/Phi-3.5-mini-instruct"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-11"&gt;&lt;/span&gt;
&lt;span id="cb14-12"&gt;&lt;span&gt;# 모델 로드 및 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-13"&gt;model &lt;span&gt;=&lt;/span&gt; AutoModelForCausalLM.from_pretrained(&lt;/span&gt;
&lt;span id="cb14-14"&gt; model_name,&lt;/span&gt;
&lt;span id="cb14-15"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cuda"&lt;/span&gt;, &lt;span&gt;# GPU 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-16"&gt; torch_dtype&lt;span&gt;=&lt;/span&gt;torch.float16, &lt;span&gt;# 16비트 부동소수점 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-17"&gt; trust_remote_code&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb14-18"&gt;)&lt;/span&gt;
&lt;span id="cb14-19"&gt;&lt;span&gt;# 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-20"&gt;tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(model_name)&lt;/span&gt;
&lt;span id="cb14-21"&gt;&lt;/span&gt;
&lt;span id="cb14-22"&gt;&lt;span&gt;# 텍스트 생성 파이프라인 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-23"&gt;pipe &lt;span&gt;=&lt;/span&gt; pipeline(&lt;/span&gt;
&lt;span id="cb14-24"&gt; &lt;span&gt;"text-generation"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb14-25"&gt; model&lt;span&gt;=&lt;/span&gt;model, &lt;span&gt;# 모델 지정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-26"&gt; tokenizer&lt;span&gt;=&lt;/span&gt;tokenizer, &lt;span&gt;# 토크나이저 지정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-27"&gt; return_full_text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;, &lt;span&gt;# 전체 텍스트 반환 안 함&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-28"&gt; max_new_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;500&lt;/span&gt;, &lt;span&gt;# 최대 새 토큰 수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-29"&gt; do_sample&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;, &lt;span&gt;# 샘플링 사용 안 함&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-30"&gt;)&lt;/span&gt;
&lt;span id="cb14-31"&gt;&lt;/span&gt;
&lt;span id="cb14-32"&gt;&lt;span&gt;# 프롬프트 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-33"&gt;messages &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb14-34"&gt; {&lt;/span&gt;
&lt;span id="cb14-35"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb14-36"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"직장동료에게 보내는 이메일의 짧은 인사말 3개만 적어줘."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb14-37"&gt; }&lt;/span&gt;
&lt;span id="cb14-38"&gt;]&lt;/span&gt;
&lt;span id="cb14-39"&gt;&lt;/span&gt;
&lt;span id="cb14-40"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-41"&gt;output &lt;span&gt;=&lt;/span&gt; pipe(messages)&lt;/span&gt;
&lt;span id="cb14-42"&gt;&lt;span&gt;# 생성된 텍스트 출력&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb14-43"&gt;&lt;span&gt;print&lt;/span&gt;(output[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;`flash-attention` package not found, consider installing for better performance: No module named 'flash_attn'.
Current `flash-attention` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;{"model_id":"0f8ef81a8c3a4e7d87938c67a78e69d9","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;You are not running the flash-attention implementation, expect numerical differences.&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 제목: 환영 인사드리기

안녕 친구,

안녕하세요! 이 이메일을 보내드리고 직장에 합류하게 되어 기쁩니다. 팀에서 함께 일하고 함께 성장하기를 기대합니다. 행운을 빌어요!

감사합니다,
[당신의 이름]&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;2.3.1&lt;/span&gt; Controlling Model Output&lt;/h3&gt;

&lt;p&gt;You can control the kind of output you get by adjusting the model's parameters. The &lt;strong&gt;temperature&lt;/strong&gt; and &lt;strong&gt;top_p&lt;/strong&gt; parameters control the randomness of the output.&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;2.3.1.1&lt;/span&gt; Temperature&lt;/h4&gt;

&lt;p&gt;Temperature controls the randomness, or creativity, of the generated text. It determines how likely the model is to choose lower-probability tokens. The basic intuition is that a temperature of 0 always selects the most probable token and therefore generates the same response every time, while higher values give less likely tokens a greater chance of being chosen.&lt;/p&gt;
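As a toy illustration of this scaling (a standalone sketch with made-up logits, separate from the Phi-3.5 pipeline above; in practice temperature 0 is implemented as greedy argmax, since dividing by zero is undefined):

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature and return the softmax distribution.
    Low temperature sharpens the distribution toward the top token;
    high temperature flattens it, making unlikely tokens more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # hypothetical next-token logits
low = apply_temperature(logits, 0.1)         # near-greedy: almost all mass on token 0
high = apply_temperature(logits, 2.0)        # flatter: other tokens gain probability
```

With temperature 0.1, virtually all probability mass lands on the top token, matching the "same response every time" behavior described above.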

&lt;h4&gt;
&lt;span&gt;2.3.1.2&lt;/span&gt; top_p&lt;/h4&gt;

&lt;p&gt;top_p, also known as nucleus sampling, is a sampling technique that controls which subset of tokens (the nucleus) the LLM may consider. Tokens are included, from most to least probable, until their cumulative probability reaches the top_p value. Setting top_p to 0.1, for example, restricts sampling to the most probable tokens that together make up the top 10% of cumulative probability.&lt;/p&gt;
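The nucleus itself is easy to compute by hand; a standalone sketch using an invented four-token distribution (not the pipeline above):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p (the 'nucleus'), zero out the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:              # nucleus is complete
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

probs = [0.6, 0.25, 0.1, 0.05]               # invented next-token distribution
nucleus = top_p_filter(probs, 0.8)           # keeps tokens 0 and 1 (0.6 + 0.25 >= 0.8)
```

Only the two most probable tokens survive; the tail tokens are excluded from sampling entirely.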

&lt;pre&gt;&lt;code&gt;&lt;span id="cb18-1"&gt;&lt;span&gt;# Using a high temperature&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb18-2"&gt;output &lt;span&gt;=&lt;/span&gt; pipe(messages, do_sample&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, temperature&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb18-3"&gt;&lt;span&gt;print&lt;/span&gt;(output[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 제목: 안녕하세요, [동료 이름]

안녕하세요! 저를 잘 기억해주시고, 전문적인 지원과 협력을 이어오시길 바랍니다.

감사드립니다!

[당신의 이름]&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb20-1"&gt;&lt;span&gt;# Using a high top_p&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb20-2"&gt;output &lt;span&gt;=&lt;/span&gt; pipe(messages, do_sample&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, top_p&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb20-3"&gt;&lt;span&gt;print&lt;/span&gt;(output[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 제목: 인사 릴렉센

---

1. 빠른 칭호와 행운을 바쳐:
   안녕하세요 [동료 이름],

   이 메시지를 전해드리며, 우리를 부러워하게 만드는 발신인 이 글을 통해 전하고자 합니다. 이 역할에서 네가 저녁부터 아침까지 우수하게 일하고 있다는 증거로 자리매김하는 것을 자랑스럽게 여깁니다.

2. 긍정적인 기여에 감사:
   네가 간략한 지원도 및 공유된 실력에 영향을 미친 프로젝트와 빈틈없는 팀플 작물에 큰 마스터피스를 제공해주셨습니다. 이 회사를 하나의 개인으로부터 더 강력하고 협력적인 집단으로 시간이 지나면서 지속적인 성장을 목격하고 있습니다.

3. 앞으로의 연결:
   이 인사의 마당에 더 나&lt;/code&gt;&lt;/pre&gt;



&lt;h3&gt;
&lt;span&gt;2.3.2&lt;/span&gt; Advanced Prompt Engineering&lt;/h3&gt;

&lt;p&gt;Crafting a good prompt can seem simple: ask a specific question, phrase it precisely, add a few examples, and you are done! But prompts can become complex very quickly, which makes prompt design an often underestimated part of working with large language models (LLMs). Here we will look at several advanced techniques for building prompts.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;2.3.3&lt;/span&gt; Complex Prompts&lt;/h3&gt;

&lt;p&gt;This complex prompt demonstrates the modular nature of prompt writing: we are free to add or remove components and judge their effect on the output.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb22-1"&gt;&lt;span&gt;# 프롬프트 구성 요소&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-2"&gt;persona &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"당신은 인공지능과 기계학습 분야의 전문가입니다. 복잡한 기술 문서를 쉽게 이해할 수 있는 요약으로 만드는 데 탁월합니다.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-3"&gt;instruction &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"제공된 기술 문서의 핵심 내용을 요약해주세요.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-4"&gt;context &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"요약은 개발자들이 문서의 가장 중요한 정보를 빠르게 파악할 수 있도록 핵심 포인트를 추출해야 합니다.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-5"&gt;data_format &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"주요 개념과 기술을 설명하는 글머리 기호 요약을 만드세요. 그 다음 주요 내용을 간결하게 요약하는 단락을 작성하세요.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-6"&gt;audience &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"이 요약은 최신 AI 개발 동향을 빠르게 파악해야 하는 바쁜 개발자들을 위한 것입니다.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-7"&gt;tone &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"전문적이고 명확한 톤을 사용해야 합니다.&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-8"&gt;&lt;/span&gt;
&lt;span id="cb22-9"&gt;&lt;span&gt;# 아래 내용을 원하는 문장으로 변경했습니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-10"&gt;text &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-11"&gt;&lt;span&gt;머신러닝 모델의 성능을 향상시키는 방법 중 하나는 앙상블 학습입니다. 앙상블 학습은 여러 개의 모델을 조합하여 더 나은 예측 결과를 얻는 방법입니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-12"&gt;&lt;span&gt;주요 앙상블 기법으로는 배깅(Bagging), 부스팅(Boosting), 스태킹(Stacking)이 있습니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-13"&gt;&lt;span&gt;배깅은 동일한 알고리즘을 사용하지만 서로 다른 학습 데이터 부분집합으로 여러 모델을 학습시키는 방법입니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-14"&gt;&lt;span&gt;부스팅은 이전 모델의 오류를 보완하는 방향으로 순차적으로 모델을 학습시키는 방법입니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-15"&gt;&lt;span&gt;스태킹은 여러 모델의 예측 결과를 새로운 모델의 입력으로 사용하여 최종 예측을 수행하는 방법입니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-16"&gt;&lt;span&gt;이러한 앙상블 기법들은 단일 모델보다 일반적으로 더 높은 성능과 안정성을 제공합니다.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-17"&gt;&lt;span&gt;"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-18"&gt;&lt;/span&gt;
&lt;span id="cb22-19"&gt;data &lt;span&gt;=&lt;/span&gt; &lt;span&gt;f"요약할 텍스트: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;text&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-20"&gt;&lt;/span&gt;
&lt;span id="cb22-21"&gt;&lt;span&gt;# 전체 프롬프트 - 생성된 출력에 미치는 영향을 보기 위해 부분을 제거하거나 추가할 수 있습니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-22"&gt;query &lt;span&gt;=&lt;/span&gt; persona &lt;span&gt;+&lt;/span&gt; instruction &lt;span&gt;+&lt;/span&gt; context &lt;span&gt;+&lt;/span&gt; data_format &lt;span&gt;+&lt;/span&gt; audience &lt;span&gt;+&lt;/span&gt; tone &lt;span&gt;+&lt;/span&gt; data&lt;/span&gt;
&lt;span id="cb22-23"&gt;&lt;/span&gt;
&lt;span id="cb22-24"&gt;messages &lt;span&gt;=&lt;/span&gt; [{&lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;, &lt;span&gt;"content"&lt;/span&gt;: query}]&lt;/span&gt;
&lt;span id="cb22-25"&gt;&lt;span&gt;print&lt;/span&gt;(tokenizer.apply_chat_template(messages, tokenize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb22-26"&gt;&lt;/span&gt;
&lt;span id="cb22-27"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb22-28"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(messages)&lt;/span&gt;
&lt;span id="cb22-29"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;|user|&amp;gt;
당신은 인공지능과 기계학습 분야의 전문가입니다. 복잡한 기술 문서를 쉽게 이해할 수 있는 요약으로 만드는 데 탁월합니다.
제공된 기술 문서의 핵심 내용을 요약해주세요.
요약은 개발자들이 문서의 가장 중요한 정보를 빠르게 파악할 수 있도록 핵심 포인트를 추출해야 합니다.
주요 개념과 기술을 설명하는 글머리 기호 요약을 만드세요. 그 다음 주요 내용을 간결하게 요약하는 단락을 작성하세요.
이 요약은 최신 AI 개발 동향을 빠르게 파악해야 하는 바쁜 개발자들을 위한 것입니다.
전문적이고 명확한 톤을 사용해야 합니다.
요약할 텍스트: 
머신러닝 모델의 성능을 향상시키는 방법 중 하나는 앙상블 학습입니다. 앙상블 학습은 여러 개의 모델을 조합하여 더 나은 예측 결과를 얻는 방법입니다.
주요 앙상블 기법으로는 배깅(Bagging), 부스팅(Boosting), 스태킹(Stacking)이 있습니다.
배깅은 동일한 알고리즘을 사용하지만 서로 다른 학습 데이터 부분집합으로 여러 모델을 학습시키는 방법입니다.
부스팅은 이전 모델의 오류를 보완하는 방향으로 순차적으로 모델을 학습시키는 방법입니다.
스태킹은 여러 모델의 예측 결과를 새로운 모델의 입력으로 사용하여 최종 예측을 수행하는 방법입니다.
이러한 앙상블 기법들은 단일 모델보다 일반적으로 더 높은 성능과 안정성을 제공합니다.
&amp;lt;|end|&amp;gt;
&amp;lt;|endoftext|&amp;gt;
 **요약: 앙상블 학습을 통한 머신러닝 성능 향상**

*글머리기호 요약:*
- 앙상블 학습: 여러 모델의 조합
- 주요 기법: 배깅, 부스팅, 스태킹
- 성능 향상: 일반적으로 더 높고 안정적

*요약 단락:*
앙상블 학습은 머신러닝 모델의 성능을 향상시키기 위해 여러 개의 모델을 조합하는 기술입니다. 주요 앙상블 기법에는 배깅, 부스팅, 스태킹이 포함됩니다.

배깅은 동일한 알고리즘을 사용하면서 서로 다른 학습 데이터 부분집합으로 여러 모델을 학습시키는 방법입니다. 이 방법은 모델의 불필요한 동질성을 줄이고 오류를 완화하여 더 안정적인 예측을 제공합니다.

부스팅은 이전 모&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;2.3.4&lt;/span&gt; In-Context Learning: Providing Examples&lt;/h3&gt;

&lt;p&gt;We can provide an LLM (large language model) with examples of exactly what we want to achieve. This is often called in-context learning, in which we supply the model with concrete demonstrations of the task.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb24-1"&gt;&lt;span&gt;# 만들어낸 단어를 문장에서 사용하는 단일 예시 활용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb24-2"&gt;one_shot_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb24-3"&gt; {&lt;/span&gt;
&lt;span id="cb24-4"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-5"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"'퀴블녹스'는 자유자재로 크기를 바꿀 수 있는 마법 생물입니다. '퀴블녹스'라는 단어를 사용한 문장의 예시는 다음과 같습니다:"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-6"&gt; },&lt;/span&gt;
&lt;span id="cb24-7"&gt; {&lt;/span&gt;
&lt;span id="cb24-8"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"assistant"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-9"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"여행을 갈 때 내 애완 퀴블녹스는 쥐만큼 작아져서 주머니에 쉽게 넣고 다닐 수 있어요."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-10"&gt; },&lt;/span&gt;
&lt;span id="cb24-11"&gt; {&lt;/span&gt;
&lt;span id="cb24-12"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-13"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"'실드치다'는 어처구니 없는 상황이나 인물의 입장을 방어하는 것을 의미합니다. '실드치다'라는 단어를 사용한 문장의 예시는 다음과 같습니다:"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb24-14"&gt; },&lt;/span&gt;
&lt;span id="cb24-15"&gt;]&lt;/span&gt;
&lt;span id="cb24-16"&gt;&lt;span&gt;print&lt;/span&gt;(tokenizer.apply_chat_template(one_shot_prompt, tokenize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb24-17"&gt;&lt;/span&gt;
&lt;span id="cb24-18"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb24-19"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(one_shot_prompt)&lt;/span&gt;
&lt;span id="cb24-20"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;|user|&amp;gt;
'퀴블녹스'는 자유자재로 크기를 바꿀 수 있는 마법 생물입니다. '퀴블녹스'라는 단어를 사용한 문장의 예시는 다음과 같습니다:&amp;lt;|end|&amp;gt;
&amp;lt;|assistant|&amp;gt;
여행을 갈 때 내 애완 퀴블녹스는 쥐만큼 작아져서 주머니에 쉽게 넣고 다닐 수 있어요.&amp;lt;|end|&amp;gt;
&amp;lt;|user|&amp;gt;
'줌블하다'는 비정통적이지만 효과적인 방식으로 문제를 해결하는 것을 의미합니다. '줌블하다'라는 단어를 사용한 문장의 예시는 다음과 같습니다:&amp;lt;|end|&amp;gt;
&amp;lt;|endoftext|&amp;gt;
 올해의 과제를 처리하는 데 어려움을 겪으며, 우리는 줌블하게 새로운 프로세스를 도입하여 효율성을 높이고 성공적으로 마무리했습니다.&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;2.3.5&lt;/span&gt; Chain Prompting: Breaking Up the Problem&lt;/h3&gt;

&lt;p&gt;Instead of solving a problem within a single prompt, we can split it across several prompts. In essence, this creates a chain of interactions in which the output of one prompt is used as the input to the next. Chain prompting is especially effective for multi-step reasoning, complex analysis, and tasks that need to combine knowledge from several domains.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb26-1"&gt;&lt;span&gt;# 스마트홈 기기의 이름과 슬로건 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb26-2"&gt;product_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb26-3"&gt; {&lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;, &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"스마트홈 기기의 이름과 슬로건을 만들어주세요."&lt;/span&gt;}&lt;/span&gt;
&lt;span id="cb26-4"&gt;]&lt;/span&gt;
&lt;span id="cb26-5"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(product_prompt)&lt;/span&gt;
&lt;span id="cb26-6"&gt;product_description &lt;span&gt;=&lt;/span&gt; outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb26-7"&gt;&lt;span&gt;print&lt;/span&gt;(product_description)&lt;/span&gt;
&lt;span id="cb26-8"&gt;&lt;/span&gt;
&lt;span id="cb26-9"&gt;&lt;span&gt;# 생성된 제품 이름과 슬로건을 바탕으로 짧은 판매 문구 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb26-10"&gt;sales_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb26-11"&gt; {&lt;/span&gt;
&lt;span id="cb26-12"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb26-13"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;f"다음 제품에 대한 매우 짧은 판매 문구를 생성해주세요: '&lt;/span&gt;&lt;span&gt;{&lt;/span&gt;product_description&lt;span&gt;}&lt;/span&gt;&lt;span&gt;'"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb26-14"&gt; }&lt;/span&gt;
&lt;span id="cb26-15"&gt;]&lt;/span&gt;
&lt;span id="cb26-16"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(sales_prompt)&lt;/span&gt;
&lt;span id="cb26-17"&gt;sales_pitch &lt;span&gt;=&lt;/span&gt; outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb26-18"&gt;&lt;span&gt;print&lt;/span&gt;(sales_pitch)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 이름: "SmartHaven"

슬로건: "SmartHaven - 디지털 편안함, 현실 속 편안한 집."

SmartHaven는 스마트홈 기기의 편안함과 효율성을 실현하는 최첨단 기기로, 집의 모든 영역에서 디지털 혁신을 제공합니다. 이 기기는 생활의 질을 향상시키고, 집의 안전성을 강화하며, 사용자의 생활을 효율적이고 편안하게 만듭니다. SmartHaven의 디지털 편안함과 현실 속 편안한 집을 상징하는 슬로건은 이러한 기능을 강조하고, 소비자들이 스마트홈의 풍부한 가치를 느낄 수 있도록 합니다.
 "SmartHaven: 현실 속 편안한 집, 디지털 편안함을 누릴 순간."&lt;/code&gt;&lt;/pre&gt;
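The two-step pattern above, where one prompt's output feeds the next, can be factored into a small generic helper. This is a sketch, not code from the book: `fake_generate` is a hypothetical stub standing in for a real generation call such as `pipe(...)`.

```python
def chain_prompts(generate, templates, initial_input=""):
    """Run a sequence of prompt templates, feeding each step's output
    into the {previous} slot of the next template."""
    previous = initial_input
    outputs = []
    for template in templates:
        previous = generate(template.format(previous=previous))
        outputs.append(previous)
    return outputs

# Hypothetical stub standing in for a real generation call such as pipe(...).
def fake_generate(prompt):
    return f"<response to: {prompt}>"

steps = chain_prompts(
    fake_generate,
    ["Name a smart-home device.{previous}",
     "Write a very short sales pitch for: {previous}"],
)
```

Each element of `steps` is one link in the chain, so intermediate results stay inspectable, which makes it easy to judge where a chain goes wrong.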

&lt;h3&gt;
&lt;span&gt;2.3.6&lt;/span&gt; Reasoning with Generative Models&lt;/h3&gt;

&lt;p&gt;Reasoning is a core component of human intelligence, and it is often compared to the emergent behavior of LLMs that merely resembles reasoning. We emphasize “resembles” because, at the time of writing, these models are generally believed to exhibit this behavior through memorization of training data and pattern matching.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;2.3.7&lt;/span&gt; Chain-of-Thought: Thinking Before Answering&lt;/h3&gt;

&lt;p&gt;Chain-of-Thought aims to have the generative model “think” first instead of answering the question directly. This approach is particularly effective for math word problems, logic puzzles, and complex decision-making, and it helps improve the model's reasoning ability.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb28-1"&gt;&lt;span&gt;# 명시적인 추론 없이 답변하기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb28-2"&gt;standard_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb28-3"&gt; {&lt;/span&gt;
&lt;span id="cb28-4"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb28-5"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"민수는 색연필 12자루를 가지고 있었습니다. 새 색연필 세트를 받았는데, 그 세트에는 8자루가 들어있었습니다. 그런데 3자루를 동생에게 주었습니다. 민수는 지금 몇 자루의 색연필을 가지고 있나요?"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb28-6"&gt; },&lt;/span&gt;
&lt;span id="cb28-7"&gt; {&lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"assistant"&lt;/span&gt;, &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"17"&lt;/span&gt;},&lt;/span&gt;
&lt;span id="cb28-8"&gt; {&lt;/span&gt;
&lt;span id="cb28-9"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb28-10"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"학교 도서관에 책이 300권 있었습니다. 새로운 책 50권을 구입했고, 학생들이 25권을 빌려갔습니다. 지금 도서관에 있는 책은 몇 권인가요?"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb28-11"&gt; },&lt;/span&gt;
&lt;span id="cb28-12"&gt;]&lt;/span&gt;
&lt;span id="cb28-13"&gt;&lt;/span&gt;
&lt;span id="cb28-14"&gt;&lt;span&gt;# 생성 모델 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb28-15"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(standard_prompt)&lt;/span&gt;
&lt;span id="cb28-16"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 325권

이 문제를 해결하기 위해서는 다음 단계를 따릅니다:

1. 도서관에는 초기에 300권의 책이 있었습니다.
2. 새로운 책 50권을 도서관에 추가했습니다. 이를 기존의 총 권수에 더합니다: 300 + 50 = 350권.
3. 그런 다음, 학생들이 25권을 빌려갔습니다. 이를 현재의 총 권수에서 빼야 합니다: 350 - 25 = 325권.

따라서, 도서관에는 현재 325권의 책이 남아 있습니다.&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb30-1"&gt;&lt;span&gt;# 사고 과정을 포함한 답변&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb30-2"&gt;cot_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb30-3"&gt; {&lt;/span&gt;
&lt;span id="cb30-4"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-5"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"민수는 색연필 12자루를 가지고 있었습니다. 새 색연필 세트를 2개 받았는데, 각 세트에는 5자루가 들어있었습니다. 민수는 지금 몇 자루의 색연필을 가지고 있나요?"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-6"&gt; },&lt;/span&gt;
&lt;span id="cb30-7"&gt; {&lt;/span&gt;
&lt;span id="cb30-8"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"assistant"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-9"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"민수는 처음에 12자루의 색연필을 가지고 있었습니다. 2개의 새 세트에 각각 5자루씩 들어있으므로 10자루를 추가로 받았습니다. 12 + 10 = 22. 따라서 답은 22자루입니다."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-10"&gt; },&lt;/span&gt;
&lt;span id="cb30-11"&gt; {&lt;/span&gt;
&lt;span id="cb30-12"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-13"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"학교 도서관에 책이 45권 있었습니다. 15권을 학생들에게 대출해주고 새로운 책 20권을 구입했습니다. 지금 도서관에 있는 책은 몇 권인가요?"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb30-14"&gt; },&lt;/span&gt;
&lt;span id="cb30-15"&gt;]&lt;/span&gt;
&lt;span id="cb30-16"&gt;&lt;/span&gt;
&lt;span id="cb30-17"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb30-18"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(cot_prompt)&lt;/span&gt;
&lt;span id="cb30-19"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 도서관에서는 처음에 45권의 책이 있었습니다. 15권을 학생들에게 대출했으므로 45 - 15 = 30권이 남았습니다. 그런 다음 20권의 새로운 책을 구입했으므로 30 + 20 = 50권의 책이 지금 도서관에 있습니다. 따라서 답은 50권입니다.&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;2.3.8&lt;/span&gt; Zero-Shot Chain-of-Thought&lt;/h3&gt;

&lt;p&gt;Instead of providing the model with examples, we can simply ask the generative model to supply its own reasoning (zero-shot chain-of-thought). Many phrasings work, but a common and effective one is to add “Let's think step by step.” This method is useful when you want reasoning quickly across many kinds of problems, and it is also an effective way to probe a model's general reasoning ability.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb32-1"&gt;&lt;span&gt;# Zero-shot Chain-of-Thought&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb32-2"&gt;zeroshot_cot_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb32-3"&gt; {&lt;/span&gt;
&lt;span id="cb32-4"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb32-5"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"도서관에 책이 50권 있었습니다. 15권을 대출해주고 새로 20권을 구입했습니다. 지금 도서관에 있는 책은 몇 권인가요? 단계별로 생각해봅시다."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb32-6"&gt; }&lt;/span&gt;
&lt;span id="cb32-7"&gt;]&lt;/span&gt;
&lt;span id="cb32-8"&gt;&lt;/span&gt;
&lt;span id="cb32-9"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb32-10"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(zeroshot_cot_prompt)&lt;/span&gt;
&lt;span id="cb32-11"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 이 문제를 해결하기 위해 다음 단계를 따르겠습니다:

1. 도서관에서 시작하는 초기 책 수: 50권
2. 대출된 책 수: 15권
3. 구입한 새로운 책 수: 20권

이제 이 값을 사용하여 현재 도서관에 있는 책 수를 계산해봅시다:

1. 시작하는 초기 책 수에서 대출된 책 수를 빼줍니다: 50 - 15 = 35권
2. 이 결과에 구입한 새로운 책 수를 더합니다: 35 + 20 = 55권

따라서, 현재 도서관에는 55권의 책이 있습니다.&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;2.3.9&lt;/span&gt; Tree-of-Thought: Exploring Intermediate Steps&lt;/h3&gt;

&lt;p&gt;Chain-of-Thought and self-consistency are meant to enable more complex reasoning: by sampling multiple “thoughts” and making them more deliberate, they improve the output of the generative model. Tree-of-Thought takes this further by branching over intermediate reasoning steps and evaluating them before continuing.&lt;/p&gt;
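Self-consistency, mentioned above, can be sketched as sampling several chain-of-thought answers and keeping the majority vote. The `sample_answer` callable below is a hypothetical stand-in for a sampled generation call (e.g. `pipe(..., do_sample=True)`) followed by answer extraction:

```python
from collections import Counter

def self_consistent_answer(sample_answer, n_samples=5):
    """Sample several chain-of-thought completions and return the
    majority-vote final answer."""
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Stub sampler: four sampled chains agree on 75, one slips to 50.
samples = iter(["75", "50", "75", "75", "75"])
answer = self_consistent_answer(lambda: next(samples))
```

A single faulty chain is outvoted by the chains that agree, which is the whole point of sampling more than once.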

&lt;pre&gt;&lt;code&gt;&lt;span id="cb34-1"&gt;&lt;span&gt;# Zero-shot Tree-of-Thought&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb34-2"&gt;zeroshot_tot_prompt &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb34-3"&gt; {&lt;/span&gt;
&lt;span id="cb34-4"&gt; &lt;span&gt;"role"&lt;/span&gt;: &lt;span&gt;"user"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb34-5"&gt; &lt;span&gt;"content"&lt;/span&gt;: &lt;span&gt;"세 명의 다른 전문가들이 이 질문에 답하고 있다고 상상해보세요. 모든 전문가는 자신의 생각의 1단계를 적은 다음 그룹과 공유합니다. 그런 다음 모든 전문가는 다음 단계로 넘어갑니다. 만약 어느 전문가라도 자신이 틀렸다는 것을 깨닫게 되면 그 즉시 토론에서 빠집니다. 질문은 '학교 도서관에 책이 80권 있었습니다. 30권을 학생들에게 대출해주고 새로운 책 25권을 구입했습니다. 지금 도서관에 있는 책은 몇 권인가요?' 입니다. 결과에 대해 반드시 토론해주세요."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb34-6"&gt; }&lt;/span&gt;
&lt;span id="cb34-7"&gt;]&lt;/span&gt;
&lt;span id="cb34-8"&gt;&lt;/span&gt;
&lt;span id="cb34-9"&gt;&lt;span&gt;# 출력 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb34-10"&gt;outputs &lt;span&gt;=&lt;/span&gt; pipe(zeroshot_tot_prompt)&lt;/span&gt;
&lt;span id="cb34-11"&gt;&lt;span&gt;print&lt;/span&gt;(outputs[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;])&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt; 1단계: 초기 책 수를 기억하기
80권의 책이 도서관에서 시작합니다.

2단계: 학생들에게 대출된 책 수를 계산하기
30권의 책이 학생들에게 대출됩니다.

3단계: 구입한 새로운 책 수를 계산하기
25권의 새로운 책이 도서관에 추가됩니다.

4단계: 현재 책 수를 계산하기
1단계에서 시작한 80권에서 2단계의 30권을 빼고, 그리고 3단계의 25권을 더합니다.

80 - 30 = 50
50 + 25 = 75

토론:
도서관에는 80권의 책이 시작되었습니다. 그 다음, 30권의 책이 학생들에게 대출되었습니다. 이로 인해 도서관에는 50권의 책이 남았습니다. 그 다음, 25권의 새로운 책이 도서관에 추가되었습니다. 따&lt;/code&gt;&lt;/pre&gt;



&lt;h2&gt;
&lt;span&gt;2.4&lt;/span&gt; Semantic Search and Retrieval-Augmented Generation&lt;/h2&gt;

&lt;h3&gt;
&lt;span&gt;2.4.1&lt;/span&gt; Dense Retrieval&lt;/h3&gt;

&lt;p&gt;Dense retrieval relies on the property that a search query will be close, in embedding space, to its relevant results.&lt;/p&gt;
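Here “close” means close in embedding space, usually measured with cosine similarity or a dot product. A toy sketch with invented 3-dimensional embeddings (real encoders such as the DPR models below produce vectors with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]        # hypothetical query embedding
relevant = [0.8, 0.2, 0.1]     # document on the same topic: nearby vector
unrelated = [0.0, 0.2, 0.9]    # document on a different topic: distant vector

scores = [cosine_similarity(query, d) for d in (relevant, unrelated)]
```

Ranking documents by this score is, at its core, all a dense retriever does; everything else is about producing good embeddings and searching them efficiently.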

&lt;h4&gt;
&lt;span&gt;2.4.1.1&lt;/span&gt; Caveats of Dense Retrieval&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;False positives: it can return results that are semantically similar but not actually relevant.&lt;/li&gt;
&lt;li&gt;No answer in the corpus: it still returns the nearest results even when the corpus contains no answer.&lt;/li&gt;
&lt;li&gt;Loss of context: because it focuses on semantic similarity rather than exact word matches, it can miss specific keywords or phrases.&lt;/li&gt;
&lt;li&gt;Computational cost: it can be expensive to run over large datasets.&lt;/li&gt;
&lt;li&gt;Domain specialization: it can struggle to capture the terminology and concepts of specialized domains accurately.&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb36-1"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb36-2"&gt; DPRQuestionEncoder,&lt;/span&gt;
&lt;span id="cb36-3"&gt; DPRContextEncoder,&lt;/span&gt;
&lt;span id="cb36-4"&gt; DPRQuestionEncoderTokenizer,&lt;/span&gt;
&lt;span id="cb36-5"&gt; DPRContextEncoderTokenizer,&lt;/span&gt;
&lt;span id="cb36-6"&gt;)&lt;/span&gt;
&lt;span id="cb36-7"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb36-8"&gt;&lt;/span&gt;
&lt;span id="cb36-9"&gt;question_model &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"facebook/dpr-question_encoder-single-nq-base"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-10"&gt;context_model &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"facebook/dpr-ctx_encoder-single-nq-base"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-11"&gt;&lt;/span&gt;
&lt;span id="cb36-12"&gt;&lt;span&gt;# 인코더와 토크나이저 초기화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-13"&gt;question_encoder &lt;span&gt;=&lt;/span&gt; DPRQuestionEncoder.from_pretrained(question_model)&lt;/span&gt;
&lt;span id="cb36-14"&gt;question_tokenizer &lt;span&gt;=&lt;/span&gt; DPRQuestionEncoderTokenizer.from_pretrained(question_model)&lt;/span&gt;
&lt;span id="cb36-15"&gt;context_encoder &lt;span&gt;=&lt;/span&gt; DPRContextEncoder.from_pretrained(context_model)&lt;/span&gt;
&lt;span id="cb36-16"&gt;context_tokenizer &lt;span&gt;=&lt;/span&gt; DPRContextEncoderTokenizer.from_pretrained(context_model)&lt;/span&gt;
&lt;span id="cb36-17"&gt;&lt;/span&gt;
&lt;span id="cb36-18"&gt;&lt;span&gt;# 질문 인코딩&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-19"&gt;question &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"듄의 작가는 누구인가요?"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-20"&gt;question_input &lt;span&gt;=&lt;/span&gt; question_tokenizer(question, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb36-21"&gt;question_embedding &lt;span&gt;=&lt;/span&gt; question_encoder(&lt;span&gt;**&lt;/span&gt;question_input).pooler_output&lt;/span&gt;
&lt;span id="cb36-22"&gt;&lt;/span&gt;
&lt;span id="cb36-23"&gt;&lt;span&gt;# 컨텍스트 인코딩&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-24"&gt;context &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"듄은 1965년에 미국 작가 프랭크 허버트가 쓴 공상과학 소설입니다."&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-25"&gt;context_input &lt;span&gt;=&lt;/span&gt; context_tokenizer(context, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb36-26"&gt;context_embedding &lt;span&gt;=&lt;/span&gt; context_encoder(&lt;span&gt;**&lt;/span&gt;context_input).pooler_output&lt;/span&gt;
&lt;span id="cb36-27"&gt;&lt;/span&gt;
&lt;span id="cb36-28"&gt;&lt;span&gt;# 유사도 계산&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb36-29"&gt;similarity &lt;span&gt;=&lt;/span&gt; torch.matmul(question_embedding, context_embedding.transpose(&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb36-30"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"유사도 점수: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;similarity&lt;span&gt;.&lt;/span&gt;item()&lt;span&gt;:.4f}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;유사도 점수: 75.5189&lt;/code&gt;&lt;/pre&gt;



&lt;h3&gt;
&lt;span&gt;2.4.2&lt;/span&gt; Reranking Example&lt;/h3&gt;

&lt;p&gt;A reranking system (e.g., monoBERT) analyzes the user's query together with the candidate results and scores how relevant each document is to that query. The pre-selected results are then reordered based on these relevance scores, improving the ranking so that more accurate and relevant information appears at the top.&lt;/p&gt;

&lt;p&gt;The main characteristics of reranking systems are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Refined relevance assessment&lt;/strong&gt;: goes beyond simple keyword matching to perform an in-depth relevance evaluation that accounts for context and meaning.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tailored ranking&lt;/strong&gt;: adjusts the order of results to reflect the user's search intent more accurately.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved search quality&lt;/strong&gt;: improves the overall search experience by surfacing the most relevant and useful information first.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multiple signals&lt;/strong&gt;: determines the ranking by jointly analyzing diverse factors such as document content, structure, and metadata.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb38-1"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; AutoTokenizer, AutoModelForSequenceClassification&lt;/span&gt;
&lt;span id="cb38-2"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb38-3"&gt;&lt;/span&gt;
&lt;span id="cb38-4"&gt;model_name &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"cross-encoder/ms-marco-MiniLM-L-6-v2"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-5"&gt;&lt;span&gt;# 재순위화 모델 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-6"&gt;tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(model_name)&lt;/span&gt;
&lt;span id="cb38-7"&gt;model &lt;span&gt;=&lt;/span&gt; AutoModelForSequenceClassification.from_pretrained(model_name)&lt;/span&gt;
&lt;span id="cb38-8"&gt;&lt;/span&gt;
&lt;span id="cb38-9"&gt;&lt;span&gt;# 예시 쿼리와 검색된 문단들&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-10"&gt;query &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"프랑스의 수도는 어디인가요?"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-11"&gt;passages &lt;span&gt;=&lt;/span&gt; [&lt;/span&gt;
&lt;span id="cb38-12"&gt; &lt;span&gt;"파리는 프랑스의 수도이자 가장 인구가 많은 도시입니다."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb38-13"&gt; &lt;span&gt;"런던은 영국과 잉글랜드의 수도입니다."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb38-14"&gt; &lt;span&gt;"프랑스는 서유럽에 위치한 국가입니다."&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb38-15"&gt;]&lt;/span&gt;
&lt;span id="cb38-16"&gt;&lt;/span&gt;
&lt;span id="cb38-17"&gt;&lt;span&gt;# 문단 재순위화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-18"&gt;pairs &lt;span&gt;=&lt;/span&gt; [[query, passage] &lt;span&gt;for&lt;/span&gt; passage &lt;span&gt;in&lt;/span&gt; passages]&lt;/span&gt;
&lt;span id="cb38-19"&gt;inputs &lt;span&gt;=&lt;/span&gt; tokenizer(pairs, padding&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, truncation&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb38-20"&gt;&lt;span&gt;with&lt;/span&gt; torch.no_grad():&lt;/span&gt;
&lt;span id="cb38-21"&gt; scores &lt;span&gt;=&lt;/span&gt; model(&lt;span&gt;**&lt;/span&gt;inputs).logits.squeeze(&lt;span&gt;-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb38-22"&gt;&lt;/span&gt;
&lt;span id="cb38-23"&gt;&lt;span&gt;# 점수에 따라 문단 정렬&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb38-24"&gt;reranked_passages &lt;span&gt;=&lt;/span&gt; [p &lt;span&gt;for&lt;/span&gt; _, p &lt;span&gt;in&lt;/span&gt; &lt;span&gt;sorted&lt;/span&gt;(&lt;span&gt;zip&lt;/span&gt;(scores, passages), reverse&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)]&lt;/span&gt;
&lt;span id="cb38-25"&gt;&lt;/span&gt;
&lt;span id="cb38-26"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;"재순위화된 문단:"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb38-27"&gt;&lt;span&gt;for&lt;/span&gt; i, passage &lt;span&gt;in&lt;/span&gt; &lt;span&gt;enumerate&lt;/span&gt;(reranked_passages, &lt;span&gt;1&lt;/span&gt;):&lt;/span&gt;
&lt;span id="cb38-28"&gt; &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"&lt;/span&gt;&lt;span&gt;{&lt;/span&gt;i&lt;span&gt;}&lt;/span&gt;&lt;span&gt;. &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;passage&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;재순위화된 문단:
1. 파리는 프랑스의 수도이자 가장 인구가 많은 도시입니다.
2. 런던은 영국과 잉글랜드의 수도입니다.
3. 프랑스는 서유럽에 위치한 국가입니다.&lt;/code&gt;&lt;/pre&gt;
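&lt;p&gt;One subtlety in the snippet above: &lt;code&gt;sorted(zip(scores, passages))&lt;/code&gt; compares 0-dimensional tensors directly. Converting the logits to plain Python floats first makes the ranking logic explicit. A minimal sketch with hard-coded dummy scores standing in for the cross-encoder output:&lt;/p&gt;

```python
import torch

# Dummy relevance logits standing in for model(**inputs).logits.squeeze(-1)
scores = torch.tensor([9.2, -1.3, 2.7])
passages = ["passage about Paris", "passage about London", "passage about France"]

# Convert tensor scores to plain floats, then sort passages by descending score
ranked = sorted(zip(scores.tolist(), passages), key=lambda pair: pair[0], reverse=True)
for rank, (score, passage) in enumerate(ranked, 1):
    print(f"{rank}. ({score:.1f}) {passage}")
```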

&lt;h3&gt;
&lt;span&gt;2.4.3&lt;/span&gt; RAG (Retrieval-Augmented Generation)&lt;/h3&gt;

&lt;p&gt;RAG is an approach that places a generative large language model (LLM) at the end of a retrieval pipeline. This lets the system generate answers grounded in the retrieved documents while citing its sources. The main characteristics and advantages of RAG are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accuracy and freshness&lt;/strong&gt;: answers are generated from documents retrieved at query time, so the system can provide current, accurate information as long as the underlying index is kept up to date.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Grounded responses&lt;/strong&gt;: each part of a generated answer can point to its source, so users can verify the information themselves.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible knowledge expansion&lt;/strong&gt;: new information can be used immediately without retraining the model, so the knowledge base can grow continuously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better context understanding&lt;/strong&gt;: the model synthesizes the context of the retrieved documents to produce deeper, more relevant answers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved transparency&lt;/strong&gt;: clearly citing sources makes the AI system's decision process easier to audit.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb40-1"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb40-2"&gt; AutoTokenizer,&lt;/span&gt;
&lt;span id="cb40-3"&gt; AutoModel,&lt;/span&gt;
&lt;span id="cb40-4"&gt; RagRetriever,&lt;/span&gt;
&lt;span id="cb40-5"&gt; RagSequenceForGeneration,&lt;/span&gt;
&lt;span id="cb40-6"&gt;)&lt;/span&gt;
&lt;span id="cb40-7"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb40-8"&gt;&lt;/span&gt;
&lt;span id="cb40-9"&gt;&lt;span&gt;# 사전 훈련된 모델 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-10"&gt;question_encoder &lt;span&gt;=&lt;/span&gt; AutoModel.from_pretrained(&lt;/span&gt;
&lt;span id="cb40-11"&gt; &lt;span&gt;"facebook/dpr-question_encoder-single-nq-base"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-12"&gt;)&lt;/span&gt;
&lt;span id="cb40-13"&gt;question_tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(&lt;/span&gt;
&lt;span id="cb40-14"&gt; &lt;span&gt;"facebook/dpr-question_encoder-single-nq-base"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-15"&gt;)&lt;/span&gt;
&lt;span id="cb40-16"&gt;generator_tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(&lt;span&gt;"facebook/bart-large"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb40-17"&gt;&lt;/span&gt;
&lt;span id="cb40-18"&gt;&lt;span&gt;# RAG 컴포넌트 초기화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-19"&gt;retriever &lt;span&gt;=&lt;/span&gt; RagRetriever.from_pretrained(&lt;/span&gt;
&lt;span id="cb40-20"&gt; &lt;span&gt;"facebook/rag-sequence-nq"&lt;/span&gt;, index_name&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"exact"&lt;/span&gt;, use_dummy_dataset&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-21"&gt;)&lt;/span&gt;
&lt;span id="cb40-22"&gt;model &lt;span&gt;=&lt;/span&gt; RagSequenceForGeneration.from_pretrained(&lt;/span&gt;
&lt;span id="cb40-23"&gt; &lt;span&gt;"facebook/rag-sequence-nq"&lt;/span&gt;, retriever&lt;span&gt;=&lt;/span&gt;retriever&lt;/span&gt;
&lt;span id="cb40-24"&gt;)&lt;/span&gt;
&lt;span id="cb40-25"&gt;&lt;/span&gt;
&lt;span id="cb40-26"&gt;&lt;/span&gt;
&lt;span id="cb40-27"&gt;&lt;span&gt;def&lt;/span&gt; generate_answer(query):&lt;/span&gt;
&lt;span id="cb40-28"&gt; &lt;span&gt;# 쿼리 인코딩&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-29"&gt; input_ids &lt;span&gt;=&lt;/span&gt; question_tokenizer(query, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)[&lt;span&gt;"input_ids"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb40-30"&gt; question_hidden_states &lt;span&gt;=&lt;/span&gt; question_encoder(input_ids)[&lt;span&gt;0&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb40-31"&gt;&lt;/span&gt;
&lt;span id="cb40-32"&gt; &lt;span&gt;# 관련 문서 검색&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-33"&gt; retriever_output &lt;span&gt;=&lt;/span&gt; retriever(input_ids, question_hidden_states, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb40-34"&gt;&lt;/span&gt;
&lt;span id="cb40-35"&gt; &lt;span&gt;# 답변 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-36"&gt; input_ids &lt;span&gt;=&lt;/span&gt; retriever_output[&lt;span&gt;"input_ids"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb40-37"&gt; attention_mask &lt;span&gt;=&lt;/span&gt; retriever_output[&lt;span&gt;"attention_mask"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb40-38"&gt; output &lt;span&gt;=&lt;/span&gt; model.generate(input_ids&lt;span&gt;=&lt;/span&gt;input_ids, attention_mask&lt;span&gt;=&lt;/span&gt;attention_mask)&lt;/span&gt;
&lt;span id="cb40-39"&gt;&lt;/span&gt;
&lt;span id="cb40-40"&gt; &lt;span&gt;# 생성된 답변 디코딩 및 반환&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-41"&gt; &lt;span&gt;return&lt;/span&gt; generator_tokenizer.decode(output[&lt;span&gt;0&lt;/span&gt;], skip_special_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb40-42"&gt;&lt;/span&gt;
&lt;span id="cb40-43"&gt;&lt;/span&gt;
&lt;span id="cb40-44"&gt;&lt;span&gt;# 사용 예시&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-45"&gt;query &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"프랑스의 수도는 어디인가요?"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb40-46"&gt;answer &lt;span&gt;=&lt;/span&gt; generate_answer(query)&lt;/span&gt;
&lt;span id="cb40-47"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"질문: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;query&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb40-48"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"답변: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;answer&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;h2&gt;
&lt;span&gt;2.5&lt;/span&gt; Multimodal LLMs&lt;/h2&gt;

&lt;p&gt;The ability of large language models (LLMs) to accept multimodal input and reason over it opens up possibilities that were previously out of reach. Here we look at several LLMs with multimodal capabilities and what they mean for real-world use cases.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;2.5.1&lt;/span&gt; CLIP (connecting text and images)&lt;/h3&gt;

&lt;p&gt;CLIP is an embedding model that can compute embeddings for both images and text. It bridges computer vision and natural language processing, bringing AI systems closer to how humans communicate and enabling more natural, intuitive interaction. CLIP's main characteristics are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Unified representation space&lt;/strong&gt;: images and text are represented in the same vector space, allowing direct comparison.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-modal learning&lt;/strong&gt;: the model learns the relationships between images and text, enabling richer understanding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible applications&lt;/strong&gt;: it can be applied to image search, image captioning, visual question answering, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero-shot capability&lt;/strong&gt;: it can recognize new concepts without additional task-specific training.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb41-1"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; CLIPTokenizerFast, CLIPProcessor, CLIPModel&lt;/span&gt;
&lt;span id="cb41-2"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb41-3"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb41-4"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb41-5"&gt;&lt;span&gt;from&lt;/span&gt; urllib.request &lt;span&gt;import&lt;/span&gt; urlopen&lt;/span&gt;
&lt;span id="cb41-6"&gt;&lt;span&gt;from&lt;/span&gt; PIL &lt;span&gt;import&lt;/span&gt; Image&lt;/span&gt;
&lt;span id="cb41-7"&gt;&lt;/span&gt;
&lt;span id="cb41-8"&gt;&lt;span&gt;# 이미지 불러오기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-9"&gt;image_url &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"https://raw.githubusercontent.com/HandsOnLLM/Hands-On-Large-Language-Models/main/chapter09/images/puppy.png"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-10"&gt;image &lt;span&gt;=&lt;/span&gt; Image.&lt;span&gt;open&lt;/span&gt;(urlopen(image_url)).convert(&lt;span&gt;"RGB"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-11"&gt;caption &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"A ppuppy playing in the snow"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-12"&gt;model_id &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"openai/clip-vit-base-patch32"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-13"&gt;&lt;/span&gt;
&lt;span id="cb41-14"&gt;&lt;span&gt;# 텍스트 전처리를 위한 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-15"&gt;clip_tokenizer &lt;span&gt;=&lt;/span&gt; CLIPTokenizerFast.from_pretrained(model_id)&lt;/span&gt;
&lt;span id="cb41-16"&gt;&lt;/span&gt;
&lt;span id="cb41-17"&gt;&lt;span&gt;# 이미지 전처리를 위한 프로세서 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-18"&gt;clip_processor &lt;span&gt;=&lt;/span&gt; CLIPProcessor.from_pretrained(model_id)&lt;/span&gt;
&lt;span id="cb41-19"&gt;&lt;/span&gt;
&lt;span id="cb41-20"&gt;&lt;span&gt;# 텍스트 및 이미지 임베딩 생성을 위한 주 모델&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-21"&gt;model &lt;span&gt;=&lt;/span&gt; CLIPModel.from_pretrained(model_id)&lt;/span&gt;
&lt;span id="cb41-22"&gt;&lt;/span&gt;
&lt;span id="cb41-23"&gt;&lt;span&gt;# 입력 토큰화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-24"&gt;inputs &lt;span&gt;=&lt;/span&gt; clip_tokenizer(caption, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-25"&gt;&lt;/span&gt;
&lt;span id="cb41-26"&gt;&lt;span&gt;# 텍스트 임베딩 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-27"&gt;text_embedding &lt;span&gt;=&lt;/span&gt; model.get_text_features(&lt;span&gt;**&lt;/span&gt;inputs)&lt;/span&gt;
&lt;span id="cb41-28"&gt;&lt;/span&gt;
&lt;span id="cb41-29"&gt;&lt;span&gt;# 이미지 전처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-30"&gt;processed_image &lt;span&gt;=&lt;/span&gt; clip_processor(text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;None&lt;/span&gt;, images&lt;span&gt;=&lt;/span&gt;image, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)[&lt;/span&gt;
&lt;span id="cb41-31"&gt; &lt;span&gt;"pixel_values"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-32"&gt;]&lt;/span&gt;
&lt;span id="cb41-33"&gt;&lt;/span&gt;
&lt;span id="cb41-34"&gt;&lt;span&gt;# 이미지 임베딩 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-35"&gt;image_embedding &lt;span&gt;=&lt;/span&gt; model.get_image_features(processed_image)&lt;/span&gt;
&lt;span id="cb41-36"&gt;&lt;/span&gt;
&lt;span id="cb41-37"&gt;&lt;span&gt;# 시각화를 위한 이미지 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-38"&gt;processed_img &lt;span&gt;=&lt;/span&gt; processed_image.squeeze(&lt;span&gt;0&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-39"&gt;processed_img &lt;span&gt;=&lt;/span&gt; processed_img.permute(&lt;span&gt;*&lt;/span&gt;torch.arange(processed_img.ndim &lt;span&gt;-&lt;/span&gt; &lt;span&gt;1&lt;/span&gt;, &lt;span&gt;-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;, &lt;span&gt;-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb41-40"&gt;processed_img &lt;span&gt;=&lt;/span&gt; np.einsum(&lt;span&gt;"ijk-&amp;gt;jik"&lt;/span&gt;, processed_img.numpy())&lt;/span&gt;
&lt;span id="cb41-41"&gt;&lt;/span&gt;
&lt;span id="cb41-42"&gt;&lt;span&gt;# 원본 이미지와 처리된 이미지 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-43"&gt;fig, (ax1, ax2) &lt;span&gt;=&lt;/span&gt; plt.subplots(&lt;span&gt;1&lt;/span&gt;, &lt;span&gt;2&lt;/span&gt;, figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;10&lt;/span&gt;, &lt;span&gt;5&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb41-44"&gt;ax1.imshow(image)&lt;/span&gt;
&lt;span id="cb41-45"&gt;ax1.set_title(&lt;span&gt;"Original Image"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-46"&gt;ax1.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-47"&gt;ax2.imshow(processed_img)&lt;/span&gt;
&lt;span id="cb41-48"&gt;ax2.set_title(&lt;span&gt;"Processed Image"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-49"&gt;ax2.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-50"&gt;plt.show()&lt;/span&gt;
&lt;span id="cb41-51"&gt;&lt;/span&gt;
&lt;span id="cb41-52"&gt;&lt;span&gt;# 임베딩 정규화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-53"&gt;text_embedding &lt;span&gt;/=&lt;/span&gt; text_embedding.norm(dim&lt;span&gt;=-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;, keepdim&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-54"&gt;image_embedding &lt;span&gt;/=&lt;/span&gt; image_embedding.norm(dim&lt;span&gt;=-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;, keepdim&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb41-55"&gt;&lt;/span&gt;
&lt;span id="cb41-56"&gt;&lt;span&gt;# 유사도 계산&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb41-57"&gt;text_embedding &lt;span&gt;=&lt;/span&gt; text_embedding.detach().cpu().numpy()&lt;/span&gt;
&lt;span id="cb41-58"&gt;image_embedding &lt;span&gt;=&lt;/span&gt; image_embedding.detach().cpu().numpy()&lt;/span&gt;
&lt;span id="cb41-59"&gt;score &lt;span&gt;=&lt;/span&gt; text_embedding &lt;span&gt;@&lt;/span&gt; image_embedding.T&lt;/span&gt;
&lt;span id="cb41-60"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"유사도 점수: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;score&lt;span&gt;.&lt;/span&gt;item()&lt;span&gt;:.4f}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-1.7922626..2.145897].&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-27-output-2.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-27-output-2.png" width="794" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;유사도 점수: 0.3006&lt;/code&gt;&lt;/pre&gt;
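&lt;p&gt;The single similarity score above extends directly to zero-shot classification: embed one caption per candidate class and take a softmax over the image-text similarities. A toy sketch with made-up, unit-normalized vectors standing in for the CLIP embeddings computed above (real CLIP embeddings are 512-dimensional):&lt;/p&gt;

```python
import numpy as np

def normalize(v):
    # Scale each row to unit length so a dot product equals cosine similarity
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Made-up embeddings standing in for get_image_features / get_text_features
image_embedding = normalize(np.array([[0.9, 0.1, 0.2]]))
text_embeddings = normalize(np.array([
    [0.8, 0.2, 0.1],  # caption 1: "a puppy playing in the snow"
    [0.1, 0.9, 0.3],  # caption 2: "a car on a highway"
]))

# Cosine similarities between the image and each candidate caption
logits = (image_embedding @ text_embeddings.T).squeeze(0)

# Softmax over captions (CLIP also scales logits by a learned temperature,
# omitted here for simplicity)
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)
```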

&lt;h3&gt;
&lt;span&gt;2.5.2&lt;/span&gt; BLIP-2 (bridging the modality gap)&lt;/h3&gt;

&lt;p&gt;Building a multimodal language model from scratch requires enormous computing power and data: billions of images, texts, and image-text pairs, which is rarely feasible. BLIP-2 tackles this difficulty with a different approach. Instead of building an architecture from scratch, it bridges the vision-language gap with a "Querying Transformer" (Q-Former) that connects a pretrained image encoder to a pretrained LLM. The main advantages of this approach are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Efficient training&lt;/strong&gt;: BLIP-2 only needs to train the connecting bridge, not the image encoder or the LLM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reuse of existing work&lt;/strong&gt;: it makes the most of models and techniques that already exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: different pretrained models can be combined, allowing configurations optimized for specific tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better performance&lt;/strong&gt;: combining the strongest models from each domain can substantially improve overall performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource savings&lt;/strong&gt;: avoiding full-model training on huge datasets saves time and cost.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb44-1"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; AutoProcessor, AutoModelForVisualQuestionAnswering&lt;/span&gt;
&lt;span id="cb44-2"&gt;&lt;span&gt;from&lt;/span&gt; sklearn.preprocessing &lt;span&gt;import&lt;/span&gt; MinMaxScaler&lt;/span&gt;
&lt;span id="cb44-3"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb44-4"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb44-5"&gt;&lt;span&gt;from&lt;/span&gt; urllib.request &lt;span&gt;import&lt;/span&gt; urlopen&lt;/span&gt;
&lt;span id="cb44-6"&gt;&lt;span&gt;from&lt;/span&gt; PIL &lt;span&gt;import&lt;/span&gt; Image&lt;/span&gt;
&lt;span id="cb44-7"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb44-8"&gt;&lt;/span&gt;
&lt;span id="cb44-9"&gt;&lt;span&gt;# 프로세서와 주 모델 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-10"&gt;blip_processor &lt;span&gt;=&lt;/span&gt; AutoProcessor.from_pretrained(&lt;/span&gt;
&lt;span id="cb44-11"&gt; &lt;span&gt;"Salesforce/blip2-opt-2.7b"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-12"&gt;)&lt;/span&gt;
&lt;span id="cb44-13"&gt;model &lt;span&gt;=&lt;/span&gt; AutoModelForVisualQuestionAnswering.from_pretrained(&lt;/span&gt;
&lt;span id="cb44-14"&gt; &lt;span&gt;"Salesforce/blip2-opt-2.7b"&lt;/span&gt;, torch_dtype&lt;span&gt;=&lt;/span&gt;torch.float16&lt;/span&gt;
&lt;span id="cb44-15"&gt;)&lt;/span&gt;
&lt;span id="cb44-16"&gt;&lt;/span&gt;
&lt;span id="cb44-17"&gt;&lt;span&gt;# 추론 속도 향상을 위해 모델을 GPU로 이동&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-18"&gt;device &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"cuda"&lt;/span&gt; &lt;span&gt;if&lt;/span&gt; torch.cuda.is_available() &lt;span&gt;else&lt;/span&gt; &lt;span&gt;"cpu"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-19"&gt;model.to(device)&lt;/span&gt;
&lt;span id="cb44-20"&gt;&lt;/span&gt;
&lt;span id="cb44-21"&gt;&lt;span&gt;# 슈퍼카 이미지 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-22"&gt;car_path &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"https://raw.githubusercontent.com/HandsOnLLM/Hands-On-Large-Language-Models/main/chapter09/images/car.png"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-23"&gt;image &lt;span&gt;=&lt;/span&gt; Image.&lt;span&gt;open&lt;/span&gt;(urlopen(car_path)).convert(&lt;span&gt;"RGB"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-24"&gt;&lt;/span&gt;
&lt;span id="cb44-25"&gt;&lt;span&gt;# 이미지 전처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-26"&gt;inputs &lt;span&gt;=&lt;/span&gt; blip_processor(image, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;).to(device, torch.float16)&lt;/span&gt;
&lt;span id="cb44-27"&gt;inputs[&lt;span&gt;"pixel_values"&lt;/span&gt;].shape&lt;/span&gt;
&lt;span id="cb44-28"&gt;&lt;/span&gt;
&lt;span id="cb44-29"&gt;&lt;span&gt;# numpy로 변환하고 (1, 3, 224, 224)에서 (224, 224, 3) 형태로 변경&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-30"&gt;image_inputs &lt;span&gt;=&lt;/span&gt; inputs[&lt;span&gt;"pixel_values"&lt;/span&gt;][&lt;span&gt;0&lt;/span&gt;].detach().cpu().numpy()&lt;/span&gt;
&lt;span id="cb44-31"&gt;image_inputs &lt;span&gt;=&lt;/span&gt; np.einsum(&lt;span&gt;"ijk-&amp;gt;kji"&lt;/span&gt;, image_inputs)&lt;/span&gt;
&lt;span id="cb44-32"&gt;image_inputs &lt;span&gt;=&lt;/span&gt; np.einsum(&lt;span&gt;"ijk-&amp;gt;jik"&lt;/span&gt;, image_inputs)&lt;/span&gt;
&lt;span id="cb44-33"&gt;&lt;/span&gt;
&lt;span id="cb44-34"&gt;&lt;span&gt;# RGB 값을 나타내기 위해 이미지 입력을 0-255로 스케일링&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-35"&gt;scaler &lt;span&gt;=&lt;/span&gt; MinMaxScaler(feature_range&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;255&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb44-36"&gt;image_inputs &lt;span&gt;=&lt;/span&gt; scaler.fit_transform(&lt;/span&gt;
&lt;span id="cb44-37"&gt; image_inputs.reshape(&lt;span&gt;-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;, image_inputs.shape[&lt;span&gt;-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb44-38"&gt;).reshape(image_inputs.shape)&lt;/span&gt;
&lt;span id="cb44-39"&gt;image_inputs &lt;span&gt;=&lt;/span&gt; np.array(image_inputs, dtype&lt;span&gt;=&lt;/span&gt;np.uint8)&lt;/span&gt;
&lt;span id="cb44-40"&gt;&lt;/span&gt;
&lt;span id="cb44-41"&gt;&lt;span&gt;# numpy 배열을 Image로 변환&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-42"&gt;Image.fromarray(image_inputs)&lt;/span&gt;
&lt;span id="cb44-43"&gt;&lt;/span&gt;
&lt;span id="cb44-44"&gt;&lt;span&gt;# 텍스트 전처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-45"&gt;text &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"Her vocalization was remarkably melodic"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-46"&gt;token_ids &lt;span&gt;=&lt;/span&gt; blip_processor(image, text&lt;span&gt;=&lt;/span&gt;text, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-47"&gt;token_ids &lt;span&gt;=&lt;/span&gt; token_ids.to(device, torch.float16)[&lt;span&gt;"input_ids"&lt;/span&gt;][&lt;span&gt;0&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb44-48"&gt;&lt;/span&gt;
&lt;span id="cb44-49"&gt;&lt;span&gt;# 입력 ID를 다시 토큰으로 변환&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-50"&gt;tokens &lt;span&gt;=&lt;/span&gt; blip_processor.tokenizer.convert_ids_to_tokens(token_ids)&lt;/span&gt;
&lt;span id="cb44-51"&gt;&lt;/span&gt;
&lt;span id="cb44-52"&gt;&lt;span&gt;# 공백 토큰을 밑줄로 대체&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-53"&gt;tokens &lt;span&gt;=&lt;/span&gt; [token.replace(&lt;span&gt;"Ġ"&lt;/span&gt;, &lt;span&gt;"_"&lt;/span&gt;) &lt;span&gt;for&lt;/span&gt; token &lt;span&gt;in&lt;/span&gt; tokens]&lt;/span&gt;
&lt;span id="cb44-54"&gt;&lt;/span&gt;
&lt;span id="cb44-55"&gt;&lt;span&gt;# 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-56"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;5&lt;/span&gt;, &lt;span&gt;5&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb44-57"&gt;&lt;/span&gt;
&lt;span id="cb44-58"&gt;&lt;span&gt;# 이미지 표시&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-59"&gt;plt.subplot(&lt;span&gt;2&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-60"&gt;plt.imshow(Image.fromarray(image_inputs))&lt;/span&gt;
&lt;span id="cb44-61"&gt;plt.title(&lt;span&gt;"Processed Image"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-62"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-63"&gt;&lt;/span&gt;
&lt;span id="cb44-64"&gt;&lt;span&gt;# 텍스트와 토큰 표시&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-65"&gt;plt.subplot(&lt;span&gt;2&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;, &lt;span&gt;2&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-66"&gt;plt.text(&lt;/span&gt;
&lt;span id="cb44-67"&gt; &lt;span&gt;0.5&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-68"&gt; &lt;span&gt;0.9&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-69"&gt; &lt;span&gt;f"Original Text: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;text&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-70"&gt; horizontalalignment&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"center"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-71"&gt; fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-72"&gt; wrap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb44-73"&gt;)&lt;/span&gt;
&lt;span id="cb44-74"&gt;plt.text(&lt;span&gt;0.5&lt;/span&gt;, &lt;span&gt;0.65&lt;/span&gt;, &lt;span&gt;"Tokens:"&lt;/span&gt;, horizontalalignment&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"center"&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-75"&gt;plt.text(&lt;/span&gt;
&lt;span id="cb44-76"&gt; &lt;span&gt;0.5&lt;/span&gt;, &lt;span&gt;0.2&lt;/span&gt;, &lt;span&gt;" "&lt;/span&gt;.join(tokens), horizontalalignment&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"center"&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;10&lt;/span&gt;, wrap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb44-77"&gt;)&lt;/span&gt;
&lt;span id="cb44-78"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb44-79"&gt;&lt;/span&gt;
&lt;span id="cb44-80"&gt;plt.tight_layout()&lt;/span&gt;
&lt;span id="cb44-81"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;{"model_id":"a25bfc74cfed4c89b2c3579fc125a259","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;



&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-28-output-2.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-28-output-2.png" width="512" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;2.5.2.1&lt;/span&gt; Use case 1: image captioning&lt;/h4&gt;

&lt;p&gt;Image captioning is the task of automatically generating text that describes the content of a given image.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb45-1"&gt;&lt;span&gt;# 이미지 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb45-2"&gt;url &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"https://upload.wikimedia.org/wikipedia/commons/7/70/Rorschach_blot_01.jpg"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb45-3"&gt;image &lt;span&gt;=&lt;/span&gt; Image.&lt;span&gt;open&lt;/span&gt;(urlopen(url)).convert(&lt;span&gt;"RGB"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb45-4"&gt;&lt;/span&gt;
&lt;span id="cb45-5"&gt;&lt;span&gt;# 캡션 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb45-6"&gt;inputs &lt;span&gt;=&lt;/span&gt; blip_processor(image, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;).to(device, torch.float16)&lt;/span&gt;
&lt;span id="cb45-7"&gt;generated_ids &lt;span&gt;=&lt;/span&gt; model.generate(&lt;span&gt;**&lt;/span&gt;inputs, max_new_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;20&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb45-8"&gt;generated_text &lt;span&gt;=&lt;/span&gt; blip_processor.batch_decode(generated_ids, skip_special_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb45-9"&gt;generated_text &lt;span&gt;=&lt;/span&gt; generated_text[&lt;span&gt;0&lt;/span&gt;].strip()&lt;/span&gt;
&lt;span id="cb45-10"&gt;&lt;/span&gt;
&lt;span id="cb45-11"&gt;&lt;span&gt;# 이미지와 생성된 텍스트 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb45-12"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;5&lt;/span&gt;, &lt;span&gt;5&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb45-13"&gt;plt.imshow(image)&lt;/span&gt;
&lt;span id="cb45-14"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb45-15"&gt;plt.title(&lt;span&gt;f"Generated text: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;generated_text&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb45-16"&gt;plt.tight_layout()&lt;/span&gt;
&lt;span id="cb45-17"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-29-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-29-output-1.png" width="490" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;2.5.2.2&lt;/span&gt; Use case 2: visual question answering&lt;/h4&gt;

&lt;p&gt;Visual question answering is a technique in which an AI answers questions about a given image.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb46-1"&gt;&lt;span&gt;# 이미지 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-2"&gt;url &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"https://upload.wikimedia.org/wikipedia/commons/7/70/Rorschach_blot_01.jpg"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-3"&gt;image &lt;span&gt;=&lt;/span&gt; Image.&lt;span&gt;open&lt;/span&gt;(urlopen(url)).convert(&lt;span&gt;"RGB"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb46-4"&gt;&lt;/span&gt;
&lt;span id="cb46-5"&gt;&lt;span&gt;# 시각적 질문 답변&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-6"&gt;prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"Question: Write down what you see in this picture. Answer:"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-7"&gt;&lt;/span&gt;
&lt;span id="cb46-8"&gt;&lt;span&gt;# 이미지와 프롬프트 처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-9"&gt;inputs &lt;span&gt;=&lt;/span&gt; blip_processor(image, text&lt;span&gt;=&lt;/span&gt;prompt, return_tensors&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"pt"&lt;/span&gt;).to(&lt;/span&gt;
&lt;span id="cb46-10"&gt; device, torch.float16&lt;/span&gt;
&lt;span id="cb46-11"&gt;)&lt;/span&gt;
&lt;span id="cb46-12"&gt;&lt;/span&gt;
&lt;span id="cb46-13"&gt;&lt;span&gt;# 텍스트 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-14"&gt;generated_ids &lt;span&gt;=&lt;/span&gt; model.generate(&lt;span&gt;**&lt;/span&gt;inputs, max_new_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;30&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb46-15"&gt;generated_text &lt;span&gt;=&lt;/span&gt; blip_processor.batch_decode(generated_ids, skip_special_tokens&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb46-16"&gt;generated_text &lt;span&gt;=&lt;/span&gt; generated_text[&lt;span&gt;0&lt;/span&gt;].strip()&lt;/span&gt;
&lt;span id="cb46-17"&gt;&lt;/span&gt;
&lt;span id="cb46-18"&gt;&lt;span&gt;# 이미지와 생성된 텍스트 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb46-19"&gt;plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;5&lt;/span&gt;, &lt;span&gt;5&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb46-20"&gt;plt.imshow(image)&lt;/span&gt;
&lt;span id="cb46-21"&gt;plt.axis(&lt;span&gt;"off"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb46-22"&gt;plt.title(&lt;span&gt;f"&lt;/span&gt;&lt;span&gt;{&lt;/span&gt;generated_text&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;, wrap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb46-23"&gt;plt.tight_layout()&lt;/span&gt;
&lt;span id="cb46-24"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="LLM_HansOnLLM_files/figure-html/cell-30-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FLLM_HansOnLLM_files%2Ffigure-html%2Fcell-30-output-1.png" width="513" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The core idea behind these multimodal text generation models is to project the visual features of an input image into text embeddings that the LLM can consume. We have seen how to use the model for image captioning and for multimodal chat-based prompting, where the two modalities are combined to generate a response.&lt;/p&gt;
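&lt;p&gt;This projection idea can be illustrated with a toy module: a learned map carries image-encoder features into the LLM's embedding space, and the projected vectors are prepended to the text token embeddings. This is a deliberately simplified stand-in for BLIP-2's Q-Former, with made-up dimensions and a plain linear layer in place of the real transformer:&lt;/p&gt;

```python
import torch
import torch.nn as nn

# Made-up dimensions: 512-dim vision features, 768-dim LLM token embeddings
vision_dim, llm_dim = 512, 768

# Stand-in for the Q-Former: in BLIP-2 this is a small transformer,
# here just a linear projection
projection = nn.Linear(vision_dim, llm_dim)

# 32 visual feature vectors (like Q-Former query tokens) and 10 text tokens
visual_features = torch.randn(1, 32, vision_dim)
text_embeddings = torch.randn(1, 10, llm_dim)

# Project the visual features into the LLM space and prepend them,
# so the LLM consumes them as ordinary "soft" tokens
visual_tokens = projection(visual_features)
llm_inputs = torch.cat([visual_tokens, text_embeddings], dim=1)
print(llm_inputs.shape)  # (1, 42, 768)
```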


&lt;br&gt;
&lt;br&gt;


&lt;h1&gt;
&lt;span&gt;3&lt;/span&gt; Training and Fine-Tuning Language Models&lt;/h1&gt;


&lt;h2&gt;
&lt;span&gt;3.1&lt;/span&gt; Creating a Text Embedding Model&lt;/h2&gt;
&lt;p&gt;Text embedding models underpin many powerful natural language processing applications and lay the groundwork that further strengthens already impressive techniques such as text generation. There are several ways to create an embedding model, but the usual focus is contrastive learning, a key ingredient of many embedding models: by pulling semantically similar pairs together and pushing dissimilar ones apart, the model learns semantic representations efficiently.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;3.1.1&lt;/span&gt; Generating Contrastive Examples&lt;/h3&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb47-1"&gt;&lt;span&gt;import&lt;/span&gt; random&lt;/span&gt;
&lt;span id="cb47-2"&gt;&lt;span&gt;from&lt;/span&gt; tqdm &lt;span&gt;import&lt;/span&gt; tqdm&lt;/span&gt;
&lt;span id="cb47-3"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset, Dataset&lt;/span&gt;
&lt;span id="cb47-4"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers &lt;span&gt;import&lt;/span&gt; SentenceTransformer, losses&lt;/span&gt;
&lt;span id="cb47-5"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.evaluation &lt;span&gt;import&lt;/span&gt; EmbeddingSimilarityEvaluator&lt;/span&gt;
&lt;span id="cb47-6"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.training_args &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainingArguments&lt;/span&gt;
&lt;span id="cb47-7"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.trainer &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainer&lt;/span&gt;
&lt;span id="cb47-8"&gt;&lt;/span&gt;
&lt;span id="cb47-9"&gt;&lt;span&gt;# GLUE에서 MNLI 데이터셋 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-10"&gt;mnli &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"mnli"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"train"&lt;/span&gt;).select(&lt;span&gt;range&lt;/span&gt;(&lt;span&gt;50_000&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb47-11"&gt;mnli &lt;span&gt;=&lt;/span&gt; mnli.remove_columns(&lt;span&gt;"idx"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb47-12"&gt;mnli &lt;span&gt;=&lt;/span&gt; mnli.&lt;span&gt;filter&lt;/span&gt;(&lt;span&gt;lambda&lt;/span&gt; x: &lt;span&gt;True&lt;/span&gt; &lt;span&gt;if&lt;/span&gt; x[&lt;span&gt;"label"&lt;/span&gt;] &lt;span&gt;==&lt;/span&gt; &lt;span&gt;0&lt;/span&gt; &lt;span&gt;else&lt;/span&gt; &lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb47-13"&gt;&lt;/span&gt;
&lt;span id="cb47-14"&gt;&lt;span&gt;# 데이터 전처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-15"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; {&lt;span&gt;"anchor"&lt;/span&gt;: [], &lt;span&gt;"positive"&lt;/span&gt;: [], &lt;span&gt;"negative"&lt;/span&gt;: []}&lt;/span&gt;
&lt;span id="cb47-16"&gt;soft_negatives &lt;span&gt;=&lt;/span&gt; mnli[&lt;span&gt;"hypothesis"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb47-17"&gt;random.shuffle(soft_negatives)&lt;/span&gt;
&lt;span id="cb47-18"&gt;&lt;span&gt;for&lt;/span&gt; row, soft_negative &lt;span&gt;in&lt;/span&gt; tqdm(&lt;span&gt;zip&lt;/span&gt;(mnli, soft_negatives)):&lt;/span&gt;
&lt;span id="cb47-19"&gt; train_dataset[&lt;span&gt;"anchor"&lt;/span&gt;].append(row[&lt;span&gt;"premise"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb47-20"&gt; train_dataset[&lt;span&gt;"positive"&lt;/span&gt;].append(row[&lt;span&gt;"hypothesis"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb47-21"&gt; train_dataset[&lt;span&gt;"negative"&lt;/span&gt;].append(soft_negative)&lt;/span&gt;
&lt;span id="cb47-22"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; Dataset.from_dict(train_dataset)&lt;/span&gt;
&lt;span id="cb47-23"&gt;&lt;/span&gt;
&lt;span id="cb47-24"&gt;&lt;span&gt;# 모델&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-25"&gt;embedding_model &lt;span&gt;=&lt;/span&gt; SentenceTransformer(&lt;span&gt;"bert-base-uncased"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb47-26"&gt;&lt;/span&gt;
&lt;span id="cb47-27"&gt;&lt;span&gt;# 손실 함수 정의. 소프트맥스 손실에서는 레이블 수를 명시적으로 설정해야 함.&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-28"&gt;train_loss &lt;span&gt;=&lt;/span&gt; losses.MultipleNegativesRankingLoss(model&lt;span&gt;=&lt;/span&gt;embedding_model)&lt;/span&gt;
&lt;span id="cb47-29"&gt;&lt;/span&gt;
&lt;span id="cb47-30"&gt;&lt;span&gt;# 평가 함수 및 stsb를 위한 임베딩 유사도 평가기 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-31"&gt;val_sts &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"stsb"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"validation"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb47-32"&gt;evaluator &lt;span&gt;=&lt;/span&gt; EmbeddingSimilarityEvaluator(&lt;/span&gt;
&lt;span id="cb47-33"&gt; sentences1&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence1"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb47-34"&gt; sentences2&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence2"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb47-35"&gt; scores&lt;span&gt;=&lt;/span&gt;[score &lt;span&gt;/&lt;/span&gt; &lt;span&gt;5&lt;/span&gt; &lt;span&gt;for&lt;/span&gt; score &lt;span&gt;in&lt;/span&gt; val_sts[&lt;span&gt;"label"&lt;/span&gt;]],&lt;/span&gt;
&lt;span id="cb47-36"&gt; main_similarity&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-37"&gt;)&lt;/span&gt;
&lt;span id="cb47-38"&gt;&lt;/span&gt;
&lt;span id="cb47-39"&gt;&lt;span&gt;# 훈련 인자 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-40"&gt;args &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainingArguments(&lt;/span&gt;
&lt;span id="cb47-41"&gt; output_dir&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"mnrloss_embedding_model"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-42"&gt; num_train_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-43"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-44"&gt; per_device_eval_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-45"&gt; warmup_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-46"&gt; fp16&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-47"&gt; eval_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-48"&gt; logging_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb47-49"&gt;)&lt;/span&gt;
&lt;span id="cb47-50"&gt;&lt;/span&gt;
&lt;span id="cb47-51"&gt;&lt;span&gt;# 임베딩 모델 훈련&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-52"&gt;trainer &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainer(&lt;/span&gt;
&lt;span id="cb47-53"&gt; model&lt;span&gt;=&lt;/span&gt;embedding_model,&lt;/span&gt;
&lt;span id="cb47-54"&gt; args&lt;span&gt;=&lt;/span&gt;args,&lt;/span&gt;
&lt;span id="cb47-55"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;train_dataset,&lt;/span&gt;
&lt;span id="cb47-56"&gt; loss&lt;span&gt;=&lt;/span&gt;train_loss,&lt;/span&gt;
&lt;span id="cb47-57"&gt; evaluator&lt;span&gt;=&lt;/span&gt;evaluator,&lt;/span&gt;
&lt;span id="cb47-58"&gt;)&lt;/span&gt;
&lt;span id="cb47-59"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb47-60"&gt;&lt;/span&gt;
&lt;span id="cb47-61"&gt;&lt;span&gt;# 훈련된 모델 평가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb47-62"&gt;evaluator(embedding_model)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;16875it [00:00, 42644.51it/s]
No sentence-transformers model found with name bert-base-uncased. Creating a new one with mean pooling.&lt;/code&gt;&lt;/pre&gt;



    
      
      
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [528/528 00:32, Epoch 1/1]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
    
    
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;0.346900&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;0.107100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;0.083700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;0.068100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;0.072500&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;p&gt;{"model_id":"89f595417aa44ce5b88220d312bf2161","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{'pearson_cosine': np.float64(0.8058287434682441),
 'spearman_cosine': np.float64(0.8093139517546301)}&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;3.1.2&lt;/span&gt; Fine-Tuning Embedding Models&lt;/h3&gt;

&lt;h4&gt;
&lt;span&gt;3.1.2.1&lt;/span&gt; Supervised Fine-Tuning (SFT)&lt;/h4&gt;

&lt;p&gt;Supervised fine-tuning (SFT) is the process of adapting a pretrained embedding model to a specific task or domain. It uses a labeled dataset to improve the model's performance and make it better suited to a particular use case.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb50-1"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb50-2"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.evaluation &lt;span&gt;import&lt;/span&gt; EmbeddingSimilarityEvaluator&lt;/span&gt;
&lt;span id="cb50-3"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers &lt;span&gt;import&lt;/span&gt; losses, SentenceTransformer&lt;/span&gt;
&lt;span id="cb50-4"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.trainer &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainer&lt;/span&gt;
&lt;span id="cb50-5"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.training_args &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainingArguments&lt;/span&gt;
&lt;span id="cb50-6"&gt;&lt;/span&gt;
&lt;span id="cb50-7"&gt;&lt;span&gt;# GLUE에서 MNLI 데이터셋 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-8"&gt;&lt;span&gt;# 0 = 함의, 1 = 중립, 2 = 모순&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-9"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"mnli"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"train"&lt;/span&gt;).select(&lt;span&gt;range&lt;/span&gt;(&lt;span&gt;25_000&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb50-10"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; train_dataset.remove_columns(&lt;span&gt;"idx"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb50-11"&gt;&lt;/span&gt;
&lt;span id="cb50-12"&gt;&lt;span&gt;# stsb를 위한 임베딩 유사도 평가기 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-13"&gt;val_sts &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"stsb"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"validation"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb50-14"&gt;evaluator &lt;span&gt;=&lt;/span&gt; EmbeddingSimilarityEvaluator(&lt;/span&gt;
&lt;span id="cb50-15"&gt; sentences1&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence1"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb50-16"&gt; sentences2&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence2"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb50-17"&gt; scores&lt;span&gt;=&lt;/span&gt;[score &lt;span&gt;/&lt;/span&gt; &lt;span&gt;5&lt;/span&gt; &lt;span&gt;for&lt;/span&gt; score &lt;span&gt;in&lt;/span&gt; val_sts[&lt;span&gt;"label"&lt;/span&gt;]],&lt;/span&gt;
&lt;span id="cb50-18"&gt; main_similarity&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-19"&gt;)&lt;/span&gt;
&lt;span id="cb50-20"&gt;&lt;/span&gt;
&lt;span id="cb50-21"&gt;&lt;span&gt;# 모델 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-22"&gt;embedding_model &lt;span&gt;=&lt;/span&gt; SentenceTransformer(&lt;span&gt;"sentence-transformers/all-MiniLM-L6-v2"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb50-23"&gt;&lt;/span&gt;
&lt;span id="cb50-24"&gt;&lt;span&gt;# 손실 함수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-25"&gt;train_loss &lt;span&gt;=&lt;/span&gt; losses.MultipleNegativesRankingLoss(model&lt;span&gt;=&lt;/span&gt;embedding_model)&lt;/span&gt;
&lt;span id="cb50-26"&gt;&lt;/span&gt;
&lt;span id="cb50-27"&gt;&lt;span&gt;# 훈련 인자 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-28"&gt;args &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainingArguments(&lt;/span&gt;
&lt;span id="cb50-29"&gt; output_dir&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"finetuned_embedding_model"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-30"&gt; num_train_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-31"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-32"&gt; per_device_eval_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-33"&gt; warmup_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-34"&gt; fp16&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-35"&gt; eval_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-36"&gt; logging_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb50-37"&gt;)&lt;/span&gt;
&lt;span id="cb50-38"&gt;&lt;/span&gt;
&lt;span id="cb50-39"&gt;&lt;span&gt;# 모델 훈련&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-40"&gt;trainer &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainer(&lt;/span&gt;
&lt;span id="cb50-41"&gt; model&lt;span&gt;=&lt;/span&gt;embedding_model,&lt;/span&gt;
&lt;span id="cb50-42"&gt; args&lt;span&gt;=&lt;/span&gt;args,&lt;/span&gt;
&lt;span id="cb50-43"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;train_dataset,&lt;/span&gt;
&lt;span id="cb50-44"&gt; loss&lt;span&gt;=&lt;/span&gt;train_loss,&lt;/span&gt;
&lt;span id="cb50-45"&gt; evaluator&lt;span&gt;=&lt;/span&gt;evaluator,&lt;/span&gt;
&lt;span id="cb50-46"&gt;)&lt;/span&gt;
&lt;span id="cb50-47"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb50-48"&gt;&lt;/span&gt;
&lt;span id="cb50-49"&gt;&lt;span&gt;# 훈련된 모델 평가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb50-50"&gt;evaluator(embedding_model)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;Column 'hypothesis' is at index 1, whereas a column with this name is usually expected at index 0. Note that the column order can be important for some losses, e.g. MultipleNegativesRankingLoss will always consider the first column as the anchor and the second as the positive, regardless of the dataset column names. Consider renaming the columns to match the expected order, e.g.:
dataset = dataset.select_columns(['hypothesis', 'entailment', 'contradiction'])&lt;/code&gt;&lt;/pre&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [782/782 00:20, Epoch 1/1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;0.127500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;0.126100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;0.108700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;0.117500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;0.115400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;600&lt;/td&gt;
&lt;td&gt;0.105800&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;700&lt;/td&gt;
&lt;td&gt;0.106100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;p&gt;{"model_id":"6c25295376e34b8a947275510261363d","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{'pearson_cosine': np.float64(0.850360102427649),
 'spearman_cosine': np.float64(0.8505789375274108)}&lt;/code&gt;&lt;/pre&gt;



&lt;h3&gt;
&lt;span&gt;3.1.3&lt;/span&gt; Unsupervised Learning&lt;/h3&gt;

&lt;p&gt;Real-world datasets rarely come with a good set of labels we can use. Instead, we need techniques that train a model without predetermined labels. This is called unsupervised learning, and several approaches exist.&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;3.1.3.1&lt;/span&gt; Transformer-Based Sequential Denoising Auto-Encoder (TSDAE)&lt;/h4&gt;

&lt;p&gt;TSDAE is a very elegant approach to building an embedding model with unsupervised learning. The method assumes we have no labeled data at all and does not require creating labels artificially.&lt;/p&gt;
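&lt;p&gt;The input corruption TSDAE relies on can be sketched as simple token deletion (a hand-rolled stand-in for what DenoisingAutoEncoderDataset does internally; the deletion ratio and the example sentence are assumptions for illustration):&lt;/p&gt;

```python
import random

def add_deletion_noise(text, del_ratio=0.6, seed=0):
    # Randomly drop a fraction of the tokens; the encoder must produce an
    # embedding of the damaged sentence from which a decoder can
    # reconstruct the original -- no labels required.
    rng = random.Random(seed)
    words = text.split()
    kept = [w for w in words if rng.random() > del_ratio]
    if not kept:  # always keep at least one token
        kept = [rng.choice(words)]
    return " ".join(kept)

original = "the quick brown fox jumps over the lazy dog"
damaged = add_deletion_noise(original)
```

Each (damaged, original) pair then serves as a training example, exactly the shape of the train_dataset built in the code below.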

&lt;pre&gt;&lt;code&gt;&lt;span id="cb53-1"&gt;&lt;span&gt;import&lt;/span&gt; nltk&lt;/span&gt;
&lt;span id="cb53-2"&gt;&lt;span&gt;from&lt;/span&gt; tqdm &lt;span&gt;import&lt;/span&gt; tqdm&lt;/span&gt;
&lt;span id="cb53-3"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; Dataset, load_dataset&lt;/span&gt;
&lt;span id="cb53-4"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.datasets &lt;span&gt;import&lt;/span&gt; DenoisingAutoEncoderDataset&lt;/span&gt;
&lt;span id="cb53-5"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.evaluation &lt;span&gt;import&lt;/span&gt; EmbeddingSimilarityEvaluator&lt;/span&gt;
&lt;span id="cb53-6"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers &lt;span&gt;import&lt;/span&gt; models, SentenceTransformer, losses&lt;/span&gt;
&lt;span id="cb53-7"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.trainer &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainer&lt;/span&gt;
&lt;span id="cb53-8"&gt;&lt;span&gt;from&lt;/span&gt; sentence_transformers.training_args &lt;span&gt;import&lt;/span&gt; SentenceTransformerTrainingArguments&lt;/span&gt;
&lt;span id="cb53-9"&gt;&lt;/span&gt;
&lt;span id="cb53-10"&gt;&lt;span&gt;# 추가 토크나이저 다운로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-11"&gt;nltk.download(&lt;span&gt;"punkt"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-12"&gt;nltk.download(&lt;span&gt;"punkt_tab"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-13"&gt;&lt;/span&gt;
&lt;span id="cb53-14"&gt;&lt;span&gt;# 문장의 평면 리스트 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-15"&gt;mnli &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"mnli"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"train"&lt;/span&gt;).select(&lt;span&gt;range&lt;/span&gt;(&lt;span&gt;25_000&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb53-16"&gt;flat_sentences &lt;span&gt;=&lt;/span&gt; mnli[&lt;span&gt;"premise"&lt;/span&gt;] &lt;span&gt;+&lt;/span&gt; mnli[&lt;span&gt;"hypothesis"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb53-17"&gt;&lt;/span&gt;
&lt;span id="cb53-18"&gt;&lt;span&gt;# 입력 데이터에 노이즈 추가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-19"&gt;damaged_data &lt;span&gt;=&lt;/span&gt; DenoisingAutoEncoderDataset(&lt;span&gt;list&lt;/span&gt;(&lt;span&gt;set&lt;/span&gt;(flat_sentences)))&lt;/span&gt;
&lt;span id="cb53-20"&gt;&lt;/span&gt;
&lt;span id="cb53-21"&gt;&lt;span&gt;# 데이터셋 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-22"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; {&lt;span&gt;"damaged_sentence"&lt;/span&gt;: [], &lt;span&gt;"original_sentence"&lt;/span&gt;: []}&lt;/span&gt;
&lt;span id="cb53-23"&gt;&lt;span&gt;for&lt;/span&gt; data &lt;span&gt;in&lt;/span&gt; tqdm(damaged_data):&lt;/span&gt;
&lt;span id="cb53-24"&gt; train_dataset[&lt;span&gt;"damaged_sentence"&lt;/span&gt;].append(data.texts[&lt;span&gt;0&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb53-25"&gt; train_dataset[&lt;span&gt;"original_sentence"&lt;/span&gt;].append(data.texts[&lt;span&gt;1&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb53-26"&gt;train_dataset &lt;span&gt;=&lt;/span&gt; Dataset.from_dict(train_dataset)&lt;/span&gt;
&lt;span id="cb53-27"&gt;&lt;/span&gt;
&lt;span id="cb53-28"&gt;&lt;span&gt;# stsb를 위한 임베딩 유사도 평가기 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-29"&gt;val_sts &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"glue"&lt;/span&gt;, &lt;span&gt;"stsb"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"validation"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-30"&gt;evaluator &lt;span&gt;=&lt;/span&gt; EmbeddingSimilarityEvaluator(&lt;/span&gt;
&lt;span id="cb53-31"&gt; sentences1&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence1"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb53-32"&gt; sentences2&lt;span&gt;=&lt;/span&gt;val_sts[&lt;span&gt;"sentence2"&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb53-33"&gt; scores&lt;span&gt;=&lt;/span&gt;[score &lt;span&gt;/&lt;/span&gt; &lt;span&gt;5&lt;/span&gt; &lt;span&gt;for&lt;/span&gt; score &lt;span&gt;in&lt;/span&gt; val_sts[&lt;span&gt;"label"&lt;/span&gt;]],&lt;/span&gt;
&lt;span id="cb53-34"&gt; main_similarity&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-35"&gt;)&lt;/span&gt;
&lt;span id="cb53-36"&gt;&lt;/span&gt;
&lt;span id="cb53-37"&gt;&lt;span&gt;# 임베딩 모델 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-38"&gt;word_embedding_model &lt;span&gt;=&lt;/span&gt; models.Transformer(&lt;span&gt;"bert-base-uncased"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-39"&gt;pooling_model &lt;span&gt;=&lt;/span&gt; models.Pooling(&lt;/span&gt;
&lt;span id="cb53-40"&gt; word_embedding_model.get_word_embedding_dimension(), &lt;span&gt;"cls"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-41"&gt;)&lt;/span&gt;
&lt;span id="cb53-42"&gt;embedding_model &lt;span&gt;=&lt;/span&gt; SentenceTransformer(modules&lt;span&gt;=&lt;/span&gt;[word_embedding_model, pooling_model])&lt;/span&gt;
&lt;span id="cb53-43"&gt;&lt;/span&gt;
&lt;span id="cb53-44"&gt;&lt;span&gt;# 디노이징 오토인코더 손실 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-45"&gt;train_loss &lt;span&gt;=&lt;/span&gt; losses.DenoisingAutoEncoderLoss(embedding_model, tie_encoder_decoder&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-46"&gt;train_loss.decoder &lt;span&gt;=&lt;/span&gt; train_loss.decoder.to(&lt;span&gt;"cuda"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb53-47"&gt;&lt;/span&gt;
&lt;span id="cb53-48"&gt;&lt;span&gt;# 훈련 인자 정의&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-49"&gt;args &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainingArguments(&lt;/span&gt;
&lt;span id="cb53-50"&gt; output_dir&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"tsdae_embedding_model"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-51"&gt; num_train_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-52"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;16&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-53"&gt; per_device_eval_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;16&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-54"&gt; warmup_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-55"&gt; fp16&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-56"&gt; eval_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-57"&gt; logging_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1000&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-58"&gt; disable_tqdm&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb53-59"&gt;)&lt;/span&gt;
&lt;span id="cb53-60"&gt;&lt;/span&gt;
&lt;span id="cb53-61"&gt;&lt;span&gt;# 모델 훈련&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-62"&gt;trainer &lt;span&gt;=&lt;/span&gt; SentenceTransformerTrainer(&lt;/span&gt;
&lt;span id="cb53-63"&gt; model&lt;span&gt;=&lt;/span&gt;embedding_model,&lt;/span&gt;
&lt;span id="cb53-64"&gt; args&lt;span&gt;=&lt;/span&gt;args,&lt;/span&gt;
&lt;span id="cb53-65"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;train_dataset,&lt;/span&gt;
&lt;span id="cb53-66"&gt; loss&lt;span&gt;=&lt;/span&gt;train_loss,&lt;/span&gt;
&lt;span id="cb53-67"&gt; evaluator&lt;span&gt;=&lt;/span&gt;evaluator,&lt;/span&gt;
&lt;span id="cb53-68"&gt;)&lt;/span&gt;
&lt;span id="cb53-69"&gt;&lt;/span&gt;
&lt;span id="cb53-70"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb53-71"&gt;&lt;/span&gt;
&lt;span id="cb53-72"&gt;&lt;span&gt;# 훈련된 모델 평가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb53-73"&gt;evaluator(embedding_model)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;[nltk_data] Downloading package punkt to /home/fkt/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package punkt_tab to /home/fkt/nltk_data...
[nltk_data] Package punkt_tab is already up-to-date!
100%|█████████████████████| 48353/48353 [00:03&amp;lt;00:00, 15391.05it/s]&lt;/code&gt;&lt;/pre&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [3023/3023 02:42, Epoch 1/1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;4.637300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2000&lt;/td&gt;
&lt;td&gt;3.883400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3000&lt;/td&gt;
&lt;td&gt;3.647900&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;p&gt;{"model_id":"ceec1d3857784c0aac1eb569554d70d8","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{'pearson_cosine': np.float64(0.7401165281596465),
 'spearman_cosine': np.float64(0.7469963144425136)}&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb56-1"&gt;&lt;span&gt;# VRAM clean up&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb56-2"&gt;&lt;span&gt;import&lt;/span&gt; gc&lt;/span&gt;
&lt;span id="cb56-3"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb56-4"&gt;&lt;/span&gt;
&lt;span id="cb56-5"&gt;gc.collect()&lt;/span&gt;
&lt;span id="cb56-6"&gt;torch.cuda.empty_cache()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;br&gt;



&lt;h2&gt;
&lt;span&gt;3.2&lt;/span&gt; Fine-Tuning Representation Models for Classification&lt;/h2&gt;
&lt;p&gt;Let's look at several ways to fine-tune a BERT model, along with example applications.&lt;/p&gt;

&lt;h3&gt;
&lt;span&gt;3.2.1&lt;/span&gt; Supervised Classification&lt;/h3&gt;
&lt;p&gt;To fine-tune a pretrained BERT model, we will use the same Rotten Tomatoes dataset as before. We will use the "bert-base-cased" model, which was pretrained on a large dataset consisting of English Wikipedia and unpublished books.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb57-1"&gt;&lt;span&gt;import&lt;/span&gt; warnings&lt;/span&gt;
&lt;span id="cb57-2"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb57-3"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb57-4"&gt; AutoTokenizer,&lt;/span&gt;
&lt;span id="cb57-5"&gt; AutoModelForSequenceClassification,&lt;/span&gt;
&lt;span id="cb57-6"&gt; DataCollatorWithPadding,&lt;/span&gt;
&lt;span id="cb57-7"&gt; TrainingArguments,&lt;/span&gt;
&lt;span id="cb57-8"&gt; Trainer,&lt;/span&gt;
&lt;span id="cb57-9"&gt; logging,&lt;/span&gt;
&lt;span id="cb57-10"&gt;)&lt;/span&gt;
&lt;span id="cb57-11"&gt;&lt;span&gt;import&lt;/span&gt; numpy &lt;span&gt;as&lt;/span&gt; np&lt;/span&gt;
&lt;span id="cb57-12"&gt;&lt;span&gt;import&lt;/span&gt; evaluate&lt;/span&gt;
&lt;span id="cb57-13"&gt;&lt;/span&gt;
&lt;span id="cb57-14"&gt;logging.set_verbosity_error() &lt;span&gt;# 경고 메시지가 표시되지 않고 오류 메시지만 표시&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-15"&gt;warnings.filterwarnings(&lt;span&gt;"ignore"&lt;/span&gt;) &lt;span&gt;# 경고 메시지 끄기&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-16"&gt;&lt;/span&gt;
&lt;span id="cb57-17"&gt;&lt;span&gt;# 데이터 준비 및 분할&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-18"&gt;tomatoes &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"rotten_tomatoes"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-19"&gt;train_data, test_data &lt;span&gt;=&lt;/span&gt; tomatoes[&lt;span&gt;"train"&lt;/span&gt;], tomatoes[&lt;span&gt;"test"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb57-20"&gt;&lt;/span&gt;
&lt;span id="cb57-21"&gt;&lt;span&gt;# 모델 및 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-22"&gt;model_id &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"bert-base-cased"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-23"&gt;model &lt;span&gt;=&lt;/span&gt; AutoModelForSequenceClassification.from_pretrained(model_id, num_labels&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-24"&gt;tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(model_id)&lt;/span&gt;
&lt;span id="cb57-25"&gt;&lt;/span&gt;
&lt;span id="cb57-26"&gt;&lt;span&gt;# 배치 내 가장 긴 시퀀스에 맞춰 패딩&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-27"&gt;data_collator &lt;span&gt;=&lt;/span&gt; DataCollatorWithPadding(tokenizer&lt;span&gt;=&lt;/span&gt;tokenizer)&lt;/span&gt;
&lt;span id="cb57-28"&gt;&lt;/span&gt;
&lt;span id="cb57-29"&gt;&lt;/span&gt;
&lt;span id="cb57-30"&gt;&lt;span&gt;def&lt;/span&gt; preprocess_function(examples):&lt;/span&gt;
&lt;span id="cb57-31"&gt; &lt;span&gt;"""입력 데이터 토큰화"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-32"&gt; &lt;span&gt;return&lt;/span&gt; tokenizer(examples[&lt;span&gt;"text"&lt;/span&gt;], truncation&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-33"&gt;&lt;/span&gt;
&lt;span id="cb57-34"&gt;&lt;/span&gt;
&lt;span id="cb57-35"&gt;&lt;span&gt;# 학습/테스트 데이터 토큰화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-36"&gt;tokenized_train &lt;span&gt;=&lt;/span&gt; train_data.&lt;span&gt;map&lt;/span&gt;(preprocess_function, batched&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-37"&gt;tokenized_test &lt;span&gt;=&lt;/span&gt; test_data.&lt;span&gt;map&lt;/span&gt;(preprocess_function, batched&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-38"&gt;&lt;/span&gt;
&lt;span id="cb57-39"&gt;&lt;/span&gt;
&lt;span id="cb57-40"&gt;&lt;span&gt;def&lt;/span&gt; compute_metrics(eval_pred):&lt;/span&gt;
&lt;span id="cb57-41"&gt; &lt;span&gt;"""Calculate F1 score"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-42"&gt; logits, labels &lt;span&gt;=&lt;/span&gt; eval_pred&lt;/span&gt;
&lt;span id="cb57-43"&gt; predictions &lt;span&gt;=&lt;/span&gt; np.argmax(logits, axis&lt;span&gt;=-&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-44"&gt; load_f1 &lt;span&gt;=&lt;/span&gt; evaluate.load(&lt;span&gt;"f1"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb57-45"&gt; f1 &lt;span&gt;=&lt;/span&gt; load_f1.compute(predictions&lt;span&gt;=&lt;/span&gt;predictions, references&lt;span&gt;=&lt;/span&gt;labels)[&lt;span&gt;"f1"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb57-46"&gt; &lt;span&gt;return&lt;/span&gt; {&lt;span&gt;"f1"&lt;/span&gt;: f1}&lt;/span&gt;
&lt;span id="cb57-47"&gt;&lt;/span&gt;
&lt;span id="cb57-48"&gt;&lt;/span&gt;
&lt;span id="cb57-49"&gt;&lt;span&gt;# 매개변수 튜닝을 위한 학습 인자&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-50"&gt;training_args &lt;span&gt;=&lt;/span&gt; TrainingArguments(&lt;/span&gt;
&lt;span id="cb57-51"&gt; &lt;span&gt;"model"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-52"&gt; learning_rate&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2e-5&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-53"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;16&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-54"&gt; per_device_eval_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;16&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-55"&gt; num_train_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;10&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-56"&gt; weight_decay&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.01&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-57"&gt; save_strategy&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"epoch"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-58"&gt; report_to&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"none"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-59"&gt; disable_tqdm&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb57-60"&gt;)&lt;/span&gt;
&lt;span id="cb57-61"&gt;&lt;/span&gt;
&lt;span id="cb57-62"&gt;&lt;span&gt;# 학습 과정을 실행하는 Trainer&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-63"&gt;trainer &lt;span&gt;=&lt;/span&gt; Trainer(&lt;/span&gt;
&lt;span id="cb57-64"&gt; model&lt;span&gt;=&lt;/span&gt;model,&lt;/span&gt;
&lt;span id="cb57-65"&gt; args&lt;span&gt;=&lt;/span&gt;training_args,&lt;/span&gt;
&lt;span id="cb57-66"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;tokenized_train,&lt;/span&gt;
&lt;span id="cb57-67"&gt; eval_dataset&lt;span&gt;=&lt;/span&gt;tokenized_test,&lt;/span&gt;
&lt;span id="cb57-68"&gt; processing_class&lt;span&gt;=&lt;/span&gt;tokenizer,&lt;/span&gt;
&lt;span id="cb57-69"&gt; data_collator&lt;span&gt;=&lt;/span&gt;data_collator,&lt;/span&gt;
&lt;span id="cb57-70"&gt; compute_metrics&lt;span&gt;=&lt;/span&gt;compute_metrics,&lt;/span&gt;
&lt;span id="cb57-71"&gt;)&lt;/span&gt;
&lt;span id="cb57-72"&gt;&lt;/span&gt;
&lt;span id="cb57-73"&gt;&lt;span&gt;# 모델 학습&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-74"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb57-75"&gt;&lt;/span&gt;
&lt;span id="cb57-76"&gt;&lt;span&gt;# 결과 평가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb57-77"&gt;trainer.evaluate()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


    
      
      
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [5340/5340 02:46, Epoch 10/10]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
    
    
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;0.418000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;0.234400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;td&gt;0.137900&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2000&lt;/td&gt;
&lt;td&gt;0.072600&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2500&lt;/td&gt;
&lt;td&gt;0.038000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3000&lt;/td&gt;
&lt;td&gt;0.032400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3500&lt;/td&gt;
&lt;td&gt;0.023500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4000&lt;/td&gt;
&lt;td&gt;0.007000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4500&lt;/td&gt;
&lt;td&gt;0.010100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5000&lt;/td&gt;
&lt;td&gt;0.004100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [67/67 00:00]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;pre&gt;&lt;code&gt;{'eval_loss': 1.279144048690796,
 'eval_f1': 0.8457899716177862,
 'eval_runtime': 1.4511,
 'eval_samples_per_second': 734.612,
 'eval_steps_per_second': 46.172,
 'epoch': 10.0}&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
&lt;span&gt;3.2.2&lt;/span&gt; Few-Shot Classification&lt;/h3&gt;

&lt;p&gt;Few-shot classification is a supervised classification technique in which a classifier learns the target labels from only a small number of labeled examples. It is useful when you need classification but sufficient labeled data is not readily available. In other words, it lets you train a model by labeling only a few high-quality data points per class.&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;3.2.2.1&lt;/span&gt; SetFit: Efficient Fine-Tuning with Few Training Examples&lt;/h4&gt;

&lt;p&gt;To perform few-shot text classification, we use an efficient framework called SetFit. It builds on the sentence-transformers architecture to produce high-quality text representations that are updated during training. SetFit consists of three steps: 1. Sampling training data: generate positive (similar) and negative (dissimilar) sentence pairs based on in-class and out-of-class selection. 2. Fine-tuning embeddings: fine-tune a pretrained embedding model on the generated training data. 3. Training a classifier: build a classification head on top of the embedding model and train it on the previously generated training data.&lt;/p&gt;
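&lt;p&gt;The pair-generation idea in step 1 can be sketched in a few lines of plain Python. This is a simplified illustration, not SetFit's actual implementation, and the &lt;code&gt;generate_pairs&lt;/code&gt; helper is hypothetical:&lt;/p&gt;

```python
from itertools import combinations

def generate_pairs(texts, labels):
    """Pair every two labeled examples: same label -> positive (1), different -> negative (0)."""
    pairs = []
    for (t1, l1), (t2, l2) in combinations(zip(texts, labels), 2):
        pairs.append((t1, t2, 1 if l1 == l2 else 0))
    return pairs

texts = ["great movie", "loved it", "boring plot", "awful acting"]
labels = [1, 1, 0, 0]
pairs = generate_pairs(texts, labels)
# 4 examples yield 6 pairs: 2 positive (within-class) and 4 negative (cross-class)
```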

&lt;h4&gt;
&lt;span&gt;3.2.2.2&lt;/span&gt; Fine-Tuning for Few-Shot Classification&lt;/h4&gt;

&lt;p&gt;Previously we trained on a dataset of roughly 8,500 movie reviews. This time, however, we are in a few-shot setting, so we will sample only 16 examples per class. With two classes, that means training on just 32 documents, compared with the 8,500 reviews used before.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb59-1"&gt;&lt;span&gt;from&lt;/span&gt; setfit &lt;span&gt;import&lt;/span&gt; sample_dataset, SetFitModel&lt;/span&gt;
&lt;span id="cb59-2"&gt;&lt;span&gt;from&lt;/span&gt; setfit &lt;span&gt;import&lt;/span&gt; TrainingArguments &lt;span&gt;as&lt;/span&gt; SetFitTrainingArguments&lt;/span&gt;
&lt;span id="cb59-3"&gt;&lt;span&gt;from&lt;/span&gt; setfit &lt;span&gt;import&lt;/span&gt; Trainer &lt;span&gt;as&lt;/span&gt; SetFitTrainer&lt;/span&gt;
&lt;span id="cb59-4"&gt;&lt;/span&gt;
&lt;span id="cb59-5"&gt;&lt;span&gt;# 클래스당 16개의 예시를 샘플링하여 few-shot 설정을 시뮬레이션합니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-6"&gt;sampled_train_data &lt;span&gt;=&lt;/span&gt; sample_dataset(tomatoes[&lt;span&gt;"train"&lt;/span&gt;], num_samples&lt;span&gt;=&lt;/span&gt;&lt;span&gt;16&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb59-7"&gt;&lt;/span&gt;
&lt;span id="cb59-8"&gt;&lt;span&gt;# 사전 훈련된 SentenceTransformer 모델을 로드합니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-9"&gt;model &lt;span&gt;=&lt;/span&gt; SetFitModel.from_pretrained(&lt;span&gt;"sentence-transformers/all-mpnet-base-v2"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb59-10"&gt;&lt;/span&gt;
&lt;span id="cb59-11"&gt;&lt;span&gt;# 훈련 인자를 정의합니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-12"&gt;args &lt;span&gt;=&lt;/span&gt; SetFitTrainingArguments(&lt;/span&gt;
&lt;span id="cb59-13"&gt; num_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;3&lt;/span&gt;, &lt;span&gt;# 대조 학습에 사용할 에폭 수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-14"&gt; num_iterations&lt;span&gt;=&lt;/span&gt;&lt;span&gt;20&lt;/span&gt;, &lt;span&gt;# 생성할 텍스트 쌍의 수&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-15"&gt;)&lt;/span&gt;
&lt;span id="cb59-16"&gt;args.eval_strategy &lt;span&gt;=&lt;/span&gt; args.evaluation_strategy&lt;/span&gt;
&lt;span id="cb59-17"&gt;&lt;/span&gt;
&lt;span id="cb59-18"&gt;&lt;span&gt;# 트레이너를 생성합니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-19"&gt;trainer &lt;span&gt;=&lt;/span&gt; SetFitTrainer(&lt;/span&gt;
&lt;span id="cb59-20"&gt; model&lt;span&gt;=&lt;/span&gt;model,&lt;/span&gt;
&lt;span id="cb59-21"&gt; args&lt;span&gt;=&lt;/span&gt;args,&lt;/span&gt;
&lt;span id="cb59-22"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;sampled_train_data,&lt;/span&gt;
&lt;span id="cb59-23"&gt; eval_dataset&lt;span&gt;=&lt;/span&gt;test_data,&lt;/span&gt;
&lt;span id="cb59-24"&gt; metric&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"f1"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb59-25"&gt;)&lt;/span&gt;
&lt;span id="cb59-26"&gt;&lt;/span&gt;
&lt;span id="cb59-27"&gt;&lt;span&gt;# 훈련 루프&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-28"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb59-29"&gt;&lt;/span&gt;
&lt;span id="cb59-30"&gt;&lt;span&gt;# 테스트 데이터로 모델을 평가합니다&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb59-31"&gt;trainer.evaluate()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference.&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;{"model_id":"24b020d35aee4bc3b68fad0a662517ab","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; *****Running training*****
  Num unique pairs = 1280
  Batch size = 16
  Num epochs = 3&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;{'embedding_loss': 0.3226, 'grad_norm': 1.9545438289642334, 'learning_rate': 8.333333333333333e-07, 'epoch': 0.0125}
{'embedding_loss': 0.1147, 'grad_norm': 0.20879538357257843, 'learning_rate': 1.7592592592592595e-05, 'epoch': 0.625}
{'embedding_loss': 0.0009, 'grad_norm': 0.026085715740919113, 'learning_rate': 1.2962962962962964e-05, 'epoch': 1.25}
{'embedding_loss': 0.0004, 'grad_norm': 0.016781330108642578, 'learning_rate': 8.333333333333334e-06, 'epoch': 1.875}
{'embedding_loss': 0.0003, 'grad_norm': 0.011119991540908813, 'learning_rate': 3.7037037037037037e-06, 'epoch': 2.5}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;{"model_id":"3734eb3a74714de8861ac0808da45d83","version_major":2,"version_minor":0,"quarto_mimetype":"application/vnd.jupyter.widget-view+json"}&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; *****Running evaluation***** &lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;{'train_runtime': 13.0872, 'train_samples_per_second': 293.417, 'train_steps_per_second': 18.339, 'train_loss': 0.025163489832387618, 'epoch': 3.0}&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;{'f1': 0.8462273161413563}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With only 32 labeled documents, we obtained an F1 score of about 0.85. Considering that the model was trained on only a tiny subset of the original data, this is a very impressive result! Beyond few-shot classification, SetFit can also handle zero-shot classification, where no labels are available at all.&lt;/p&gt;


&lt;br&gt;



&lt;h2&gt;
&lt;span&gt;3.3&lt;/span&gt; Fine-Tuning Generative Models&lt;/h2&gt;

&lt;h3&gt;
&lt;span&gt;3.3.1&lt;/span&gt; Supervised Fine-Tuning (SFT)&lt;/h3&gt;

&lt;h4&gt;
&lt;span&gt;3.3.1.1&lt;/span&gt; Full Fine-Tuning&lt;/h4&gt;
&lt;p&gt;The most common form of fine-tuning is full fine-tuning. As with pretraining an LLM, this process updates all of the model's parameters for the target supervised fine-tuning (SFT) task.&lt;/p&gt;



&lt;h4&gt;
&lt;span&gt;3.3.1.2&lt;/span&gt; Parameter-Efficient Fine-Tuning (PEFT)&lt;/h4&gt;
&lt;p&gt;Updating all of a model's parameters can greatly improve performance, but it has several drawbacks: training is expensive, takes a long time, and requires substantial storage. To address these issues, attention has turned to parameter-efficient fine-tuning (PEFT), a family of alternatives that focuses on fine-tuning pretrained models with greater computational efficiency.&lt;/p&gt;

&lt;h5&gt;
&lt;span&gt;3.3.1.2.1&lt;/span&gt; Adapters&lt;/h5&gt;
&lt;p&gt;Adapters are a core component of many PEFT-based techniques. The method adds small, modular components inside the transformer that can be fine-tuned to improve the model's performance on a specific task, without having to fine-tune all of the model's weights. This saves a great deal of time and compute.&lt;/p&gt;
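&lt;p&gt;The adapter idea can be sketched with plain matrices: a small bottleneck (down-projection, nonlinearity, up-projection) added to a frozen hidden state through a residual connection. This is an illustrative sketch with made-up dimensions, not the exact architecture of any particular adapter paper:&lt;/p&gt;

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    z = np.maximum(0.0, h @ W_down)  # ReLU in the low-dimensional bottleneck
    return h + z @ W_up              # residual connection preserves the original signal

d_model, d_bottleneck = 768, 64      # only 2 * 768 * 64 parameters are trained per adapter
rng = np.random.default_rng(0)
W_down = rng.normal(scale=0.02, size=(d_model, d_bottleneck))
W_up = np.zeros((d_bottleneck, d_model))  # zero init: the adapter starts as the identity

h = rng.normal(size=(4, d_model))    # a batch of 4 hidden states
out = adapter(h, W_down, W_up)       # identical to h at initialization because W_up is zero
```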



&lt;h5&gt;
&lt;span&gt;3.3.1.2.2&lt;/span&gt; Low-Rank Adaptation (LoRA)&lt;/h5&gt;
&lt;p&gt;As an alternative to adapters, low-rank adaptation (LoRA) was introduced and is now a widely used and effective PEFT technique. Like adapters, LoRA requires updating only a small number of parameters.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Model compression for more efficient training: to make LoRA even more efficient, we can reduce the memory requirements of the model's original weights before projecting them into smaller matrices. An LLM's weights are numeric values with a given precision, which can be expressed by the number of bits used, such as float64 or float32.&lt;/p&gt;
&lt;/blockquote&gt;
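&lt;p&gt;The core of LoRA can be sketched with plain matrices: the frozen weight W is augmented with a trainable low-rank update (alpha/r)·BA, so only 2·d·r parameters are trained. This is an illustrative sketch with a made-up weight size; r=64 and lora_alpha=32 mirror the LoraConfig used in this post:&lt;/p&gt;

```python
import numpy as np

d, r, alpha = 1024, 64, 32               # hypothetical weight size; rank and alpha as in LoraConfig
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))              # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))  # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init: no change at the start

def lora_forward(x):
    # W stays frozen; only A and B are updated during training
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = d * d        # 1,048,576 parameters for full fine-tuning of this matrix
lora_params = 2 * d * r    # 131,072 parameters, i.e. 12.5% of the full matrix
x = rng.normal(size=(2, d))
y = lora_forward(x)        # identical to x @ W.T at initialization because B is zero
```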

&lt;pre&gt;&lt;code&gt;&lt;span id="cb66-1"&gt;&lt;span&gt;import&lt;/span&gt; torch&lt;/span&gt;
&lt;span id="cb66-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb66-3"&gt; AutoTokenizer,&lt;/span&gt;
&lt;span id="cb66-4"&gt; AutoModelForCausalLM,&lt;/span&gt;
&lt;span id="cb66-5"&gt; AutoTokenizer,&lt;/span&gt;
&lt;span id="cb66-6"&gt; BitsAndBytesConfig,&lt;/span&gt;
&lt;span id="cb66-7"&gt; TrainingArguments,&lt;/span&gt;
&lt;span id="cb66-8"&gt;)&lt;/span&gt;
&lt;span id="cb66-9"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb66-10"&gt;&lt;span&gt;from&lt;/span&gt; trl &lt;span&gt;import&lt;/span&gt; SFTTrainer&lt;/span&gt;
&lt;span id="cb66-11"&gt;&lt;span&gt;from&lt;/span&gt; peft &lt;span&gt;import&lt;/span&gt; LoraConfig, prepare_model_for_kbit_training, get_peft_model&lt;/span&gt;
&lt;span id="cb66-12"&gt;&lt;/span&gt;
&lt;span id="cb66-13"&gt;output_dir &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"./model"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-14"&gt;&lt;/span&gt;
&lt;span id="cb66-15"&gt;&lt;span&gt;# TinyLlama의 채팅 템플릿을 사용하기 위해 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-16"&gt;template_tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(&lt;span&gt;"TinyLlama/TinyLlama-1.1B-Chat-v1.0"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb66-17"&gt;&lt;/span&gt;
&lt;span id="cb66-18"&gt;&lt;/span&gt;
&lt;span id="cb66-19"&gt;&lt;span&gt;def&lt;/span&gt; format_prompt(example):&lt;/span&gt;
&lt;span id="cb66-20"&gt; &lt;span&gt;"""TinyLLama가 사용하는 &amp;lt;|user|&amp;gt; 템플릿을 사용하여 프롬프트 포맷"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-21"&gt;&lt;/span&gt;
&lt;span id="cb66-22"&gt; &lt;span&gt;# 답변 포맷&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-23"&gt; chat &lt;span&gt;=&lt;/span&gt; example[&lt;span&gt;"messages"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb66-24"&gt; prompt &lt;span&gt;=&lt;/span&gt; template_tokenizer.apply_chat_template(chat, tokenize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb66-25"&gt;&lt;/span&gt;
&lt;span id="cb66-26"&gt; &lt;span&gt;return&lt;/span&gt; {&lt;span&gt;"text"&lt;/span&gt;: prompt}&lt;/span&gt;
&lt;span id="cb66-27"&gt;&lt;/span&gt;
&lt;span id="cb66-28"&gt;&lt;/span&gt;
&lt;span id="cb66-29"&gt;&lt;span&gt;# TinyLLama가 사용하는 템플릿을 사용하여 데이터 로드 및 포맷&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-30"&gt;dataset &lt;span&gt;=&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb66-31"&gt; load_dataset(&lt;span&gt;"HuggingFaceH4/ultrachat_200k"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"test_sft"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb66-32"&gt; .shuffle(seed&lt;span&gt;=&lt;/span&gt;&lt;span&gt;42&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb66-33"&gt; .select(&lt;span&gt;range&lt;/span&gt;(&lt;span&gt;3_000&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb66-34"&gt;)&lt;/span&gt;
&lt;span id="cb66-35"&gt;dataset &lt;span&gt;=&lt;/span&gt; dataset.&lt;span&gt;map&lt;/span&gt;(format_prompt)&lt;/span&gt;
&lt;span id="cb66-36"&gt;&lt;/span&gt;
&lt;span id="cb66-37"&gt;model_name &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-38"&gt;&lt;/span&gt;
&lt;span id="cb66-39"&gt;&lt;span&gt;# 4비트 양자화 설정 - QLoRA의 Q&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-40"&gt;bnb_config &lt;span&gt;=&lt;/span&gt; BitsAndBytesConfig(&lt;/span&gt;
&lt;span id="cb66-41"&gt; load_in_4bit&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, &lt;span&gt;# 4비트 정밀도 모델 로딩 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-42"&gt; bnb_4bit_quant_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"nf4"&lt;/span&gt;, &lt;span&gt;# 양자화 유형&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-43"&gt; bnb_4bit_compute_dtype&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"float16"&lt;/span&gt;, &lt;span&gt;# 계산 데이터 타입&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-44"&gt; bnb_4bit_use_double_quant&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, &lt;span&gt;# 중첩 양자화 적용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-45"&gt;)&lt;/span&gt;
&lt;span id="cb66-46"&gt;&lt;/span&gt;
&lt;span id="cb66-47"&gt;&lt;span&gt;# GPU에서 훈련할 모델 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-48"&gt;model &lt;span&gt;=&lt;/span&gt; AutoModelForCausalLM.from_pretrained(&lt;/span&gt;
&lt;span id="cb66-49"&gt; model_name,&lt;/span&gt;
&lt;span id="cb66-50"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"auto"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-51"&gt; &lt;span&gt;# 일반 SFT의 경우 이 부분 제외&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-52"&gt; quantization_config&lt;span&gt;=&lt;/span&gt;bnb_config,&lt;/span&gt;
&lt;span id="cb66-53"&gt;)&lt;/span&gt;
&lt;span id="cb66-54"&gt;model.config.use_cache &lt;span&gt;=&lt;/span&gt; &lt;span&gt;False&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-55"&gt;model.config.pretraining_tp &lt;span&gt;=&lt;/span&gt; &lt;span&gt;1&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-56"&gt;&lt;/span&gt;
&lt;span id="cb66-57"&gt;&lt;span&gt;# LLaMA 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-58"&gt;tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(model_name, trust_remote_code&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb66-59"&gt;tokenizer.pad_token &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"&amp;lt;PAD&amp;gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-60"&gt;tokenizer.padding_side &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"left"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-61"&gt;tokenizer.chat_template &lt;span&gt;=&lt;/span&gt; template_tokenizer.chat_template&lt;/span&gt;
&lt;span id="cb66-62"&gt;&lt;/span&gt;
&lt;span id="cb66-63"&gt;&lt;span&gt;# LoRA 설정 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-64"&gt;peft_config &lt;span&gt;=&lt;/span&gt; LoraConfig(&lt;/span&gt;
&lt;span id="cb66-65"&gt; lora_alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;, &lt;span&gt;# LoRA 스케일링&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-66"&gt; lora_dropout&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;, &lt;span&gt;# LoRA 레이어의 드롭아웃&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-67"&gt; r&lt;span&gt;=&lt;/span&gt;&lt;span&gt;64&lt;/span&gt;, &lt;span&gt;# 랭크&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-68"&gt; bias&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"none"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-69"&gt; task_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"CAUSAL_LM"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-70"&gt; target_modules&lt;span&gt;=&lt;/span&gt;[ &lt;span&gt;# 대상 레이어&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-71"&gt; &lt;span&gt;"k_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-72"&gt; &lt;span&gt;"gate_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-73"&gt; &lt;span&gt;"v_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-74"&gt; &lt;span&gt;"up_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-75"&gt; &lt;span&gt;"q_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-76"&gt; &lt;span&gt;"o_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-77"&gt; &lt;span&gt;"down_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-78"&gt; ],&lt;/span&gt;
&lt;span id="cb66-79"&gt;)&lt;/span&gt;
&lt;span id="cb66-80"&gt;&lt;/span&gt;
&lt;span id="cb66-81"&gt;&lt;span&gt;# 훈련을 위한 모델 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-82"&gt;model &lt;span&gt;=&lt;/span&gt; prepare_model_for_kbit_training(model)&lt;/span&gt;
&lt;span id="cb66-83"&gt;model &lt;span&gt;=&lt;/span&gt; get_peft_model(model, peft_config)&lt;/span&gt;
&lt;span id="cb66-84"&gt;&lt;/span&gt;
&lt;span id="cb66-85"&gt;&lt;span&gt;# 훈련 인자&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-86"&gt;training_arguments &lt;span&gt;=&lt;/span&gt; TrainingArguments(&lt;/span&gt;
&lt;span id="cb66-87"&gt; output_dir&lt;span&gt;=&lt;/span&gt;output_dir,&lt;/span&gt;
&lt;span id="cb66-88"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-89"&gt; gradient_accumulation_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;4&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-90"&gt; optim&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"paged_adamw_32bit"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-91"&gt; learning_rate&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2e-4&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-92"&gt; lr_scheduler_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-93"&gt; num_train_epochs&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-94"&gt; logging_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-95"&gt; fp16&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-96"&gt; gradient_checkpointing&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-97"&gt; disable_tqdm&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb66-98"&gt;)&lt;/span&gt;
&lt;span id="cb66-99"&gt;&lt;/span&gt;
&lt;span id="cb66-100"&gt;&lt;span&gt;# 지도 미세조정 매개변수 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-101"&gt;trainer &lt;span&gt;=&lt;/span&gt; SFTTrainer(&lt;/span&gt;
&lt;span id="cb66-102"&gt; model&lt;span&gt;=&lt;/span&gt;model,&lt;/span&gt;
&lt;span id="cb66-103"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;dataset,&lt;/span&gt;
&lt;span id="cb66-104"&gt; &lt;span&gt;# dataset_text_field="text",&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-105"&gt; tokenizer&lt;span&gt;=&lt;/span&gt;tokenizer,&lt;/span&gt;
&lt;span id="cb66-106"&gt; args&lt;span&gt;=&lt;/span&gt;training_arguments,&lt;/span&gt;
&lt;span id="cb66-107"&gt; &lt;span&gt;# max_seq_length=512,&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-108"&gt; &lt;span&gt;# 일반 SFT의 경우 이 부분 제외&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-109"&gt; peft_config&lt;span&gt;=&lt;/span&gt;peft_config,&lt;/span&gt;
&lt;span id="cb66-110"&gt;)&lt;/span&gt;
&lt;span id="cb66-111"&gt;&lt;/span&gt;
&lt;span id="cb66-112"&gt;&lt;span&gt;# 모델 훈련&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-113"&gt;trainer.train()&lt;/span&gt;
&lt;span id="cb66-114"&gt;&lt;/span&gt;
&lt;span id="cb66-115"&gt;&lt;span&gt;# QLoRA 가중치 저장&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb66-116"&gt;trainer.model.save_pretrained(&lt;span&gt;"./model/TinyLlama-1.1B-qlora"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


    
      
      
&lt;pre&gt;&lt;code&gt;[375/375 06:35, Epoch 1/1]&lt;/code&gt;&lt;/pre&gt;
    
    
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;5.425600&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;5.160700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;5.117600&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;pre&gt;&lt;code&gt;&lt;span id="cb67-1"&gt;&lt;span&gt;from&lt;/span&gt; peft &lt;span&gt;import&lt;/span&gt; AutoPeftModelForCausalLM&lt;/span&gt;
&lt;span id="cb67-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; pipeline&lt;/span&gt;
&lt;span id="cb67-3"&gt;&lt;/span&gt;
&lt;span id="cb67-4"&gt;model &lt;span&gt;=&lt;/span&gt; AutoPeftModelForCausalLM.from_pretrained(&lt;/span&gt;
&lt;span id="cb67-5"&gt; &lt;span&gt;"./model/TinyLlama-1.1B-qlora"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb67-6"&gt; low_cpu_mem_usage&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb67-7"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"auto"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb67-8"&gt;)&lt;/span&gt;
&lt;span id="cb67-9"&gt;&lt;/span&gt;
&lt;span id="cb67-10"&gt;&lt;span&gt;# LoRA와 기본 모델 병합&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-11"&gt;merged_model &lt;span&gt;=&lt;/span&gt; model.merge_and_unload()&lt;/span&gt;
&lt;span id="cb67-12"&gt;&lt;/span&gt;
&lt;span id="cb67-13"&gt;&lt;span&gt;# 미리 정의된 프롬프트 템플릿 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-14"&gt;prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"""&amp;lt;|user|&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-15"&gt;&lt;span&gt;독감 예방 접종이 필요한 이유에 대해 간단히 설명해줘.&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-16"&gt;&lt;span&gt;&amp;lt;|assistant|&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-17"&gt;&lt;span&gt;"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-18"&gt;&lt;span&gt;# 튜닝된 모델 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb67-19"&gt;pipe &lt;span&gt;=&lt;/span&gt; pipeline(task&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"text-generation"&lt;/span&gt;, model&lt;span&gt;=&lt;/span&gt;merged_model, tokenizer&lt;span&gt;=&lt;/span&gt;tokenizer)&lt;/span&gt;
&lt;span id="cb67-20"&gt;pipe(prompt)[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;'&amp;lt;|user|&amp;gt;\n독감 예방 접종이 필요한 이유에 대해 간단히 설명해줘.&amp;lt;/s&amp;gt;\n&amp;lt;|assistant|&amp;gt;\nThe reason for preventive treatment is to prevent the spread of the disease and to reduce the risk of complications. This is especially important for people with underlying health conditions, such as diabetes or high blood pressure, who are at higher risk of developing complications.'&lt;/code&gt;&lt;/pre&gt;


&lt;br&gt;



&lt;h3&gt;
&lt;span&gt;3.3.2&lt;/span&gt; Evaluating Generative Models&lt;/h3&gt;
&lt;p&gt;Evaluating generative models is a significant challenge.&lt;/p&gt;

&lt;h4&gt;
&lt;span&gt;3.3.2.1&lt;/span&gt; Word-Level Metrics&lt;/h4&gt;
&lt;p&gt;One common category of metrics for comparing generative models is word-level evaluation. These traditional techniques compare generated tokens against a reference dataset at the token (set) level. Common word-level metrics include perplexity, ROUGE, BLEU, and BERTScore.&lt;/p&gt;
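&lt;p&gt;Of these, perplexity is the simplest to state: the exponential of the average negative log-probability the model assigns to each token, i.e. the inverse geometric mean of the per-token probabilities. A minimal sketch with made-up log-probabilities:&lt;/p&gt;

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# hypothetical per-token log-probabilities from a model
logprobs = [math.log(0.5), math.log(0.25), math.log(0.5)]
ppl = perplexity(logprobs)  # equals (0.5 * 0.25 * 0.5) ** (-1/3)
```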



&lt;h4&gt;
&lt;span&gt;3.3.2.2&lt;/span&gt; Benchmarks&lt;/h4&gt;
&lt;p&gt;A common way to evaluate generative models on language generation and understanding tasks is to use well-known public benchmarks such as MMLU, GLUE, TruthfulQA, GSM8k, and HellaSwag.&lt;/p&gt;



&lt;h4&gt;
&lt;span&gt;3.3.2.3&lt;/span&gt; Leaderboards&lt;/h4&gt;
&lt;p&gt;Because so many benchmarks exist, it can be difficult to choose which one best suits your model. When a model is released, it is often evaluated on several benchmarks to show its overall performance.&lt;/p&gt;
&lt;p&gt;Accordingly, leaderboards that span multiple benchmarks have been developed. A well-known example is the Open LLM Leaderboard, which currently includes six benchmarks, among them HellaSwag, MMLU, TruthfulQA, and GSM8k.&lt;/p&gt;





&lt;h3&gt;
&lt;span&gt;3.3.3&lt;/span&gt; Preference Tuning (PPO/DPO)&lt;/h3&gt;
&lt;p&gt;Even once a model can follow instructions, a final training stage can further improve how it behaves across a variety of situations. For example, for the question “What is an LLM?”, we might prefer an answer that explains the inner workings of LLMs in detail over the simple reply “It is a large language model.” So how can we align an LLM's outputs with our (human) preference for one answer over another?&lt;/p&gt;
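&lt;p&gt;Direct Preference Optimization (DPO) answers this by training directly on preference pairs: it rewards the policy for raising the log-probability of the chosen answer relative to the rejected one, measured against a frozen reference model. A minimal sketch of the per-pair loss with made-up log-probabilities; beta=0.1 mirrors the DPOConfig used in this post:&lt;/p&gt;

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO per-pair loss: -log(sigmoid(beta * (policy log-ratio - reference log-ratio)))."""
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# when the policy prefers the chosen answer more strongly than the reference does,
# the margin is positive and the loss drops below log(2)
loss = dpo_loss(-10.0, -20.0, -12.0, -18.0, beta=0.1)
```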

&lt;pre&gt;&lt;code&gt;&lt;span id="cb69-1"&gt;&lt;span&gt;from&lt;/span&gt; datasets &lt;span&gt;import&lt;/span&gt; load_dataset&lt;/span&gt;
&lt;span id="cb69-2"&gt;&lt;span&gt;from&lt;/span&gt; peft &lt;span&gt;import&lt;/span&gt; (&lt;/span&gt;
&lt;span id="cb69-3"&gt; AutoPeftModelForCausalLM,&lt;/span&gt;
&lt;span id="cb69-4"&gt; LoraConfig,&lt;/span&gt;
&lt;span id="cb69-5"&gt; prepare_model_for_kbit_training,&lt;/span&gt;
&lt;span id="cb69-6"&gt; get_peft_model,&lt;/span&gt;
&lt;span id="cb69-7"&gt;)&lt;/span&gt;
&lt;span id="cb69-8"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; BitsAndBytesConfig, AutoTokenizer, logging&lt;/span&gt;
&lt;span id="cb69-9"&gt;&lt;span&gt;from&lt;/span&gt; trl &lt;span&gt;import&lt;/span&gt; DPOConfig, DPOTrainer&lt;/span&gt;
&lt;span id="cb69-10"&gt;&lt;/span&gt;
&lt;span id="cb69-11"&gt;&lt;/span&gt;
&lt;span id="cb69-12"&gt;&lt;span&gt;# 데이터 전처리&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-13"&gt;&lt;span&gt;def&lt;/span&gt; format_prompt(example):&lt;/span&gt;
&lt;span id="cb69-14"&gt; &lt;span&gt;"""TinyLLama가 사용하는 &amp;lt;|user|&amp;gt; 템플릿을 사용하여 프롬프트 포맷"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-15"&gt; system &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"&amp;lt;|system|&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt; &lt;span&gt;+&lt;/span&gt; example[&lt;span&gt;"system"&lt;/span&gt;] &lt;span&gt;+&lt;/span&gt; &lt;span&gt;"&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-16"&gt; prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"&amp;lt;|user|&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt; &lt;span&gt;+&lt;/span&gt; example[&lt;span&gt;"input"&lt;/span&gt;] &lt;span&gt;+&lt;/span&gt; &lt;span&gt;"&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;&amp;lt;|assistant|&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-17"&gt; chosen &lt;span&gt;=&lt;/span&gt; example[&lt;span&gt;"chosen"&lt;/span&gt;] &lt;span&gt;+&lt;/span&gt; &lt;span&gt;"&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-18"&gt; rejected &lt;span&gt;=&lt;/span&gt; example[&lt;span&gt;"rejected"&lt;/span&gt;] &lt;span&gt;+&lt;/span&gt; &lt;span&gt;"&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;span&gt;\n&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-19"&gt; &lt;span&gt;return&lt;/span&gt; {&lt;/span&gt;
&lt;span id="cb69-20"&gt; &lt;span&gt;"prompt"&lt;/span&gt;: system &lt;span&gt;+&lt;/span&gt; prompt,&lt;/span&gt;
&lt;span id="cb69-21"&gt; &lt;span&gt;"chosen"&lt;/span&gt;: chosen,&lt;/span&gt;
&lt;span id="cb69-22"&gt; &lt;span&gt;"rejected"&lt;/span&gt;: rejected,&lt;/span&gt;
&lt;span id="cb69-23"&gt; }&lt;/span&gt;
&lt;span id="cb69-24"&gt;&lt;/span&gt;
&lt;span id="cb69-25"&gt;&lt;/span&gt;
&lt;span id="cb69-26"&gt;&lt;span&gt;# 데이터셋에 포맷 적용 및 비교적 짧은 답변 선택&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-27"&gt;dpo_dataset &lt;span&gt;=&lt;/span&gt; load_dataset(&lt;span&gt;"argilla/distilabel-intel-orca-dpo-pairs"&lt;/span&gt;, split&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"train"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb69-28"&gt;dpo_dataset &lt;span&gt;=&lt;/span&gt; dpo_dataset.&lt;span&gt;filter&lt;/span&gt;(&lt;/span&gt;
&lt;span id="cb69-29"&gt; &lt;span&gt;lambda&lt;/span&gt; r: r[&lt;span&gt;"status"&lt;/span&gt;] &lt;span&gt;!=&lt;/span&gt; &lt;span&gt;"tie"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-30"&gt; &lt;span&gt;and&lt;/span&gt; r[&lt;span&gt;"chosen_score"&lt;/span&gt;] &lt;span&gt;&amp;gt;=&lt;/span&gt; &lt;span&gt;8&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-31"&gt; &lt;span&gt;and&lt;/span&gt; &lt;span&gt;not&lt;/span&gt; r[&lt;span&gt;"in_gsm8k_train"&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb69-32"&gt;)&lt;/span&gt;
&lt;span id="cb69-33"&gt;dpo_dataset &lt;span&gt;=&lt;/span&gt; dpo_dataset.&lt;span&gt;map&lt;/span&gt;(format_prompt, remove_columns&lt;span&gt;=&lt;/span&gt;dpo_dataset.column_names)&lt;/span&gt;
&lt;span id="cb69-34"&gt;&lt;/span&gt;
&lt;span id="cb69-35"&gt;&lt;span&gt;# 모델 양자화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-36"&gt;&lt;span&gt;# 4비트 양자화 설정 - QLoRA의 Q&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-37"&gt;bnb_config &lt;span&gt;=&lt;/span&gt; BitsAndBytesConfig(&lt;/span&gt;
&lt;span id="cb69-38"&gt; load_in_4bit&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, &lt;span&gt;# 4비트 정밀도 모델 로딩 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-39"&gt; bnb_4bit_quant_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"nf4"&lt;/span&gt;, &lt;span&gt;# 양자화 타입&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-40"&gt; bnb_4bit_compute_dtype&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"float16"&lt;/span&gt;, &lt;span&gt;# 계산 데이터 타입&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-41"&gt; bnb_4bit_use_double_quant&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;, &lt;span&gt;# 중첩 양자화 적용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-42"&gt;)&lt;/span&gt;
&lt;span id="cb69-43"&gt;&lt;/span&gt;
&lt;span id="cb69-44"&gt;&lt;span&gt;# LoRA와 기본 모델 병합&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-45"&gt;model &lt;span&gt;=&lt;/span&gt; AutoPeftModelForCausalLM.from_pretrained(&lt;/span&gt;
&lt;span id="cb69-46"&gt; &lt;span&gt;"./model/TinyLlama-1.1B-qlora"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-47"&gt; low_cpu_mem_usage&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-48"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"auto"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-49"&gt; quantization_config&lt;span&gt;=&lt;/span&gt;bnb_config,&lt;/span&gt;
&lt;span id="cb69-50"&gt;)&lt;/span&gt;
&lt;span id="cb69-51"&gt;merged_model &lt;span&gt;=&lt;/span&gt; model.merge_and_unload()&lt;/span&gt;
&lt;span id="cb69-52"&gt;&lt;/span&gt;
&lt;span id="cb69-53"&gt;&lt;span&gt;# LLaMA 토크나이저 로드&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-54"&gt;model_name &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-55"&gt;tokenizer &lt;span&gt;=&lt;/span&gt; AutoTokenizer.from_pretrained(model_name, trust_remote_code&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb69-56"&gt;tokenizer.pad_token &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"&amp;lt;PAD&amp;gt;"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-57"&gt;tokenizer.padding_side &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"left"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-58"&gt;&lt;/span&gt;
&lt;span id="cb69-59"&gt;&lt;span&gt;# LoRA 설정 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-60"&gt;peft_config &lt;span&gt;=&lt;/span&gt; LoraConfig(&lt;/span&gt;
&lt;span id="cb69-61"&gt; lora_alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;32&lt;/span&gt;, &lt;span&gt;# LoRA 스케일링&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-62"&gt; lora_dropout&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;, &lt;span&gt;# LoRA 레이어의 드롭아웃&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-63"&gt; r&lt;span&gt;=&lt;/span&gt;&lt;span&gt;64&lt;/span&gt;, &lt;span&gt;# 랭크&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-64"&gt; bias&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"none"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-65"&gt; task_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"CAUSAL_LM"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-66"&gt; target_modules&lt;span&gt;=&lt;/span&gt;[&lt;/span&gt;
&lt;span id="cb69-67"&gt; &lt;span&gt;# 대상 레이어&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-68"&gt; &lt;span&gt;"k_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-69"&gt; &lt;span&gt;"gate_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-70"&gt; &lt;span&gt;"v_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-71"&gt; &lt;span&gt;"up_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-72"&gt; &lt;span&gt;"q_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-73"&gt; &lt;span&gt;"o_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-74"&gt; &lt;span&gt;"down_proj"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-75"&gt; ],&lt;/span&gt;
&lt;span id="cb69-76"&gt;)&lt;/span&gt;
&lt;span id="cb69-77"&gt;&lt;/span&gt;
&lt;span id="cb69-78"&gt;&lt;span&gt;# 훈련을 위한 모델 준비&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-79"&gt;model &lt;span&gt;=&lt;/span&gt; prepare_model_for_kbit_training(model)&lt;/span&gt;
&lt;span id="cb69-80"&gt;model &lt;span&gt;=&lt;/span&gt; get_peft_model(model, peft_config)&lt;/span&gt;
&lt;span id="cb69-81"&gt;&lt;/span&gt;
&lt;span id="cb69-82"&gt;output_dir &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"./model"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-83"&gt;&lt;/span&gt;
&lt;span id="cb69-84"&gt;&lt;span&gt;# 훈련 인자&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-85"&gt;training_arguments &lt;span&gt;=&lt;/span&gt; DPOConfig(&lt;/span&gt;
&lt;span id="cb69-86"&gt; output_dir&lt;span&gt;=&lt;/span&gt;output_dir,&lt;/span&gt;
&lt;span id="cb69-87"&gt; per_device_train_batch_size&lt;span&gt;=&lt;/span&gt;&lt;span&gt;2&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-88"&gt; gradient_accumulation_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;4&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-89"&gt; optim&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"paged_adamw_32bit"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-90"&gt; learning_rate&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1e-5&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-91"&gt; lr_scheduler_type&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"cosine"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-92"&gt; max_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;500&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-93"&gt; logging_steps&lt;span&gt;=&lt;/span&gt;&lt;span&gt;100&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-94"&gt; fp16&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-95"&gt; gradient_checkpointing&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-96"&gt; warmup_ratio&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-97"&gt; beta&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.1&lt;/span&gt;, &lt;span&gt;# beta 값을 여기에 추가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-98"&gt; max_prompt_length&lt;span&gt;=&lt;/span&gt;&lt;span&gt;512&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-99"&gt; max_length&lt;span&gt;=&lt;/span&gt;&lt;span&gt;512&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-100"&gt; disable_tqdm&lt;span&gt;=&lt;/span&gt;&lt;span&gt;False&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb69-101"&gt;)&lt;/span&gt;
&lt;span id="cb69-102"&gt;&lt;/span&gt;
&lt;span id="cb69-103"&gt;&lt;span&gt;# DPO 트레이너 생성&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-104"&gt;dpo_trainer &lt;span&gt;=&lt;/span&gt; DPOTrainer(&lt;/span&gt;
&lt;span id="cb69-105"&gt; model,&lt;/span&gt;
&lt;span id="cb69-106"&gt; args&lt;span&gt;=&lt;/span&gt;training_arguments,&lt;/span&gt;
&lt;span id="cb69-107"&gt; train_dataset&lt;span&gt;=&lt;/span&gt;dpo_dataset,&lt;/span&gt;
&lt;span id="cb69-108"&gt; processing_class&lt;span&gt;=&lt;/span&gt;tokenizer, &lt;span&gt;# tokenizer 대신 processing_class 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-109"&gt; peft_config&lt;span&gt;=&lt;/span&gt;peft_config,&lt;/span&gt;
&lt;span id="cb69-110"&gt;)&lt;/span&gt;
&lt;span id="cb69-111"&gt;&lt;/span&gt;
&lt;span id="cb69-112"&gt;&lt;span&gt;# DPO로 모델 미세조정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-113"&gt;dpo_trainer.train()&lt;/span&gt;
&lt;span id="cb69-114"&gt;&lt;/span&gt;
&lt;span id="cb69-115"&gt;&lt;span&gt;# 어댑터 저장&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb69-116"&gt;dpo_trainer.model.save_pretrained(&lt;span&gt;"./model/TinyLlama-1.1B-dpo-qlora"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


    
      
      
&lt;pre&gt;&lt;code&gt;[500/500 10:36, Epoch 0/1]&lt;/code&gt;&lt;/pre&gt;
    
    
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Training Loss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;0.593300&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;0.485700&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;300&lt;/td&gt;
&lt;td&gt;0.520400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;td&gt;0.476500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500&lt;/td&gt;
&lt;td&gt;0.489200&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;pre&gt;&lt;code&gt;&lt;span id="cb70-1"&gt;&lt;span&gt;from&lt;/span&gt; peft &lt;span&gt;import&lt;/span&gt; PeftModel&lt;/span&gt;
&lt;span id="cb70-2"&gt;&lt;span&gt;from&lt;/span&gt; transformers &lt;span&gt;import&lt;/span&gt; pipeline&lt;/span&gt;
&lt;span id="cb70-3"&gt;&lt;/span&gt;
&lt;span id="cb70-4"&gt;&lt;span&gt;# LoRA와 기본 모델 병합&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-5"&gt;model &lt;span&gt;=&lt;/span&gt; AutoPeftModelForCausalLM.from_pretrained(&lt;/span&gt;
&lt;span id="cb70-6"&gt; &lt;span&gt;"./model/TinyLlama-1.1B-qlora"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb70-7"&gt; low_cpu_mem_usage&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb70-8"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"auto"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb70-9"&gt;)&lt;/span&gt;
&lt;span id="cb70-10"&gt;sft_model &lt;span&gt;=&lt;/span&gt; model.merge_and_unload()&lt;/span&gt;
&lt;span id="cb70-11"&gt;&lt;/span&gt;
&lt;span id="cb70-12"&gt;&lt;span&gt;# DPO LoRA와 SFT 모델 병합&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-13"&gt;dpo_model &lt;span&gt;=&lt;/span&gt; PeftModel.from_pretrained(&lt;/span&gt;
&lt;span id="cb70-14"&gt; sft_model,&lt;/span&gt;
&lt;span id="cb70-15"&gt; &lt;span&gt;"./model/TinyLlama-1.1B-dpo-qlora"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb70-16"&gt; device_map&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"auto"&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb70-17"&gt;)&lt;/span&gt;
&lt;span id="cb70-18"&gt;dpo_model &lt;span&gt;=&lt;/span&gt; dpo_model.merge_and_unload()&lt;/span&gt;
&lt;span id="cb70-19"&gt;&lt;/span&gt;
&lt;span id="cb70-20"&gt;&lt;span&gt;# 정의된 프롬프트 템플릿 사용&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-21"&gt;prompt &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"""&amp;lt;|user|&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-22"&gt;&lt;span&gt;독감 예방 접종의 중요성에 대해 설명해.&amp;lt;/s&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-23"&gt;&lt;span&gt;&amp;lt;|assistant|&amp;gt;&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-24"&gt;&lt;span&gt;"""&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-25"&gt;&lt;/span&gt;
&lt;span id="cb70-26"&gt;&lt;span&gt;# 튜닝된 모델 실행&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb70-27"&gt;pipe &lt;span&gt;=&lt;/span&gt; pipeline(task&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"text-generation"&lt;/span&gt;, model&lt;span&gt;=&lt;/span&gt;dpo_model, tokenizer&lt;span&gt;=&lt;/span&gt;tokenizer)&lt;/span&gt;
&lt;span id="cb70-28"&gt;pipe(prompt)[&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;"generated_text"&lt;/span&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;'&amp;lt;|user|&amp;gt;\n독감 예방 접종의 중요성에 대해 설명해.&amp;lt;/s&amp;gt;\n&amp;lt;|assistant|&amp;gt;\nThe importance of preventive treatment in the prevention of chronic diseases has been recognized for centuries. Chronic diseases such as heart disease, stroke, diabetes, and cancer are the leading causes of death worldwide. Preventive treatment is essential to reduce the risk of developing these diseases and improve the quality of life for patients.\n\nPreventive treatment involves a combination of lifestyle changes, medications, and medical interventions. These interventions aim to reduce the risk of developing chronic diseases by modifying the lifestyle of the patient, such as smoking cessation, physical activity, and dietary modification.\n\nPreventive treatment is often recommended for patients with high risk of developing chronic diseases, such as those with a family history of heart disease, diabetes, or cancer. Patients with these risk factors should be screened regularly for early detection of the disease and receive preventive treatment as soon as possible.\n\nIn addition to preventive treatment, patients with chronic diseases should be monitored regularly to detect any changes in their condition and to ensure that they receive the appropriate treatment. This monitoring can help to identify early signs of disease progression and to prevent complications.\n\nIn conclusion, preventive treatment is essential to reduce the risk of developing chronic diseases and improve the quality of life for patients. By following a healthy lifestyle, making lifestyle changes, and receiving preventive treatment, patients can reduce their risk of developing chronic diseases and improve their overall health.'&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The fine-tuning process we walked through has two stages. In the first stage, we performed supervised fine-tuning of a pretrained LLM on instruction data, a step commonly called instruction tuning. The result is a model that behaves like a chat assistant and can follow instructions accurately.&lt;/p&gt;

&lt;p&gt;In the second stage, we further refined the model with alignment data, that is, data indicating which kinds of answers are preferred over others. This process, called preference tuning, injects human preferences into the previously instruction-tuned model.&lt;/p&gt;

&lt;p&gt;Fine-tuning with the SFT+DPO combination works well, but it is computationally expensive: it requires two training loops and hyperparameter tuning in both. New methods are emerging to overcome this; a notable one is Odds Ratio Preference Optimization (ORPO), which combines SFT and DPO into a single training run. It simplifies training by eliminating the two separate loops while still allowing the use of QLoRA.&lt;/p&gt;
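&lt;p&gt;The idea behind ORPO can be sketched in a few lines: a single objective adds an odds-ratio preference penalty to the ordinary SFT negative log-likelihood, so no separate reference model or second loop is needed. The snippet below is a minimal, framework-free illustration of that term; the function names, the &lt;code&gt;lam&lt;/code&gt; weight, and the probability inputs are illustrative assumptions, not code from this post.&lt;/p&gt;

```python
import math

def odds(p):
    # odds of a probability p in (0, 1)
    return p / (1.0 - p)

def orpo_penalty(p_chosen, p_rejected):
    # ORPO's odds-ratio term: -log sigmoid(log odds(chosen) - log odds(rejected))
    log_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_ratio)))

def orpo_loss(nll_chosen, p_chosen, p_rejected, lam=0.1):
    # single-stage objective: SFT NLL on the chosen answer + lam * odds-ratio penalty
    return nll_chosen + lam * orpo_penalty(p_chosen, p_rejected)

# the penalty shrinks as the model prefers the chosen answer more strongly
print(orpo_penalty(0.9, 0.1) < orpo_penalty(0.5, 0.5))  # → True
```

&lt;p&gt;Because the preference term is just an extra penalty on the SFT loss, one optimizer pass handles both objectives, which is exactly what removes the second training loop.&lt;/p&gt;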


&lt;br&gt;


&lt;h1&gt;
&lt;span&gt;4&lt;/span&gt; Wrapping Up&lt;/h1&gt;

&lt;p&gt;In this post we looked at how LLMs can be used for specific tasks, including classification, generation, and language representation, and at various ways to fine-tune a pretrained LLM. By mastering these techniques, you will be able to build innovative solutions with LLMs. To close, I want to emphasize that our exploration of LLMs is only the beginning. Many more exciting developments lie ahead, and I encourage you to keep following the progress in this field.&lt;/p&gt;



&lt;h1&gt;
&lt;span&gt;5&lt;/span&gt; References&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/HandsOnLLM/Hands-On-Large-Language-Models" rel="noopener noreferrer"&gt;Hands on large language model repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rasbt/LLMs-from-scratch" rel="noopener noreferrer"&gt;LLM rfrom scratch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/philschmid/deep-learning-pytorch-huggingface" rel="noopener noreferrer"&gt;Deep learning with pytorch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;



</description>
      <category>llm</category>
      <category>machinelearning</category>
      <category>python</category>
    </item>
    <item>
      <title>Calories Burned by Typing</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Wed, 22 Jan 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ehottl/taipingeuro-sobidoeneun-kalrori-ehi</link>
      <guid>https://dev.to/ehottl/taipingeuro-sobidoeneun-kalrori-ehi</guid>
      <description>&lt;p&gt;I suddenly wondered how many calories typing burns. I figured pounding a keyboard all day might amount to a decent workout, and it turns out someone had already done the math.[^1] This post is essentially a translation of the original author's article, with some additional code of my own. To estimate the calories burned by typing, I worked through the following steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calculate the calories burned by one minute of typing&lt;/li&gt;
&lt;li&gt;Measure the number of keystrokes in a day&lt;/li&gt;
&lt;li&gt;Calculate the total calories burned per day&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;This post is unscientific, so please read it just for fun.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
&lt;span&gt;1&lt;/span&gt; Calories Burned per Minute of Typing&lt;/h1&gt;

&lt;p&gt;To count keystrokes, I wrote a simple Python GUI program, shown below. In case I improve it later, I have put it in a &lt;a href="https://github.com/partrita/typecount" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
&lt;span&gt;1.1&lt;/span&gt; Python Code&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb1-1"&gt;&lt;span&gt;import&lt;/span&gt; tkinter &lt;span&gt;as&lt;/span&gt; tk&lt;/span&gt;
&lt;span id="cb1-2"&gt;&lt;span&gt;from&lt;/span&gt; pynput.keyboard &lt;span&gt;import&lt;/span&gt; Listener&lt;/span&gt;
&lt;span id="cb1-3"&gt;&lt;span&gt;import&lt;/span&gt; csv&lt;/span&gt;
&lt;span id="cb1-4"&gt;&lt;span&gt;from&lt;/span&gt; datetime &lt;span&gt;import&lt;/span&gt; date&lt;/span&gt;
&lt;span id="cb1-5"&gt;&lt;span&gt;import&lt;/span&gt; os&lt;/span&gt;
&lt;span id="cb1-6"&gt;&lt;/span&gt;
&lt;span id="cb1-7"&gt;&lt;span&gt;class&lt;/span&gt; TypingCounter:&lt;/span&gt;
&lt;span id="cb1-8"&gt; &lt;span&gt;def&lt;/span&gt; &lt;span&gt; __init__ &lt;/span&gt;(&lt;span&gt;self&lt;/span&gt;, master):&lt;/span&gt;
&lt;span id="cb1-9"&gt; &lt;span&gt;self&lt;/span&gt;.master &lt;span&gt;=&lt;/span&gt; master&lt;/span&gt;
&lt;span id="cb1-10"&gt; master.title(&lt;span&gt;"Typing Counter v0.2.0"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-11"&gt; master.geometry(&lt;span&gt;"200x200+100+100"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-12"&gt;&lt;/span&gt;
&lt;span id="cb1-13"&gt; &lt;span&gt;self&lt;/span&gt;.count &lt;span&gt;=&lt;/span&gt; &lt;span&gt;0&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-14"&gt; &lt;span&gt;self&lt;/span&gt;.is_counting &lt;span&gt;=&lt;/span&gt; &lt;span&gt;False&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-15"&gt; &lt;span&gt;self&lt;/span&gt;.csv_file &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"typing_count.csv"&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-16"&gt;&lt;/span&gt;
&lt;span id="cb1-17"&gt; &lt;span&gt;self&lt;/span&gt;.label &lt;span&gt;=&lt;/span&gt; tk.Label(master, text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Count: 0"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-18"&gt; &lt;span&gt;self&lt;/span&gt;.label.pack(pady&lt;span&gt;=&lt;/span&gt;&lt;span&gt;20&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-19"&gt;&lt;/span&gt;
&lt;span id="cb1-20"&gt; &lt;span&gt;self&lt;/span&gt;.start_button &lt;span&gt;=&lt;/span&gt; tk.Button(master, text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Start"&lt;/span&gt;, command&lt;span&gt;=&lt;/span&gt;&lt;span&gt;self&lt;/span&gt;.start_counting)&lt;/span&gt;
&lt;span id="cb1-21"&gt; &lt;span&gt;self&lt;/span&gt;.start_button.pack()&lt;/span&gt;
&lt;span id="cb1-22"&gt;&lt;/span&gt;
&lt;span id="cb1-23"&gt; &lt;span&gt;self&lt;/span&gt;.stop_button &lt;span&gt;=&lt;/span&gt; tk.Button(master, text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Stop"&lt;/span&gt;, command&lt;span&gt;=&lt;/span&gt;&lt;span&gt;self&lt;/span&gt;.stop_counting, state&lt;span&gt;=&lt;/span&gt;tk.DISABLED)&lt;/span&gt;
&lt;span id="cb1-24"&gt; &lt;span&gt;self&lt;/span&gt;.stop_button.pack()&lt;/span&gt;
&lt;span id="cb1-25"&gt;&lt;/span&gt;
&lt;span id="cb1-26"&gt; &lt;span&gt;self&lt;/span&gt;.save_button &lt;span&gt;=&lt;/span&gt; tk.Button(master, text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Save"&lt;/span&gt;, command&lt;span&gt;=&lt;/span&gt;&lt;span&gt;self&lt;/span&gt;.save_count)&lt;/span&gt;
&lt;span id="cb1-27"&gt; &lt;span&gt;self&lt;/span&gt;.save_button.pack()&lt;/span&gt;
&lt;span id="cb1-28"&gt;&lt;/span&gt;
&lt;span id="cb1-29"&gt; &lt;span&gt;self&lt;/span&gt;.quit_button &lt;span&gt;=&lt;/span&gt; tk.Button(master, text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"Quit"&lt;/span&gt;, command&lt;span&gt;=&lt;/span&gt;master.quit)&lt;/span&gt;
&lt;span id="cb1-30"&gt; &lt;span&gt;self&lt;/span&gt;.quit_button.pack()&lt;/span&gt;
&lt;span id="cb1-31"&gt;&lt;/span&gt;
&lt;span id="cb1-32"&gt; &lt;span&gt;self&lt;/span&gt;.listener &lt;span&gt;=&lt;/span&gt; &lt;span&gt;None&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-33"&gt;&lt;/span&gt;
&lt;span id="cb1-34"&gt; &lt;span&gt;def&lt;/span&gt; start_counting(&lt;span&gt;self&lt;/span&gt;):&lt;/span&gt;
&lt;span id="cb1-35"&gt; &lt;span&gt;self&lt;/span&gt;.is_counting &lt;span&gt;=&lt;/span&gt; &lt;span&gt;True&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-36"&gt; &lt;span&gt;self&lt;/span&gt;.start_button.config(state&lt;span&gt;=&lt;/span&gt;tk.DISABLED)&lt;/span&gt;
&lt;span id="cb1-37"&gt; &lt;span&gt;self&lt;/span&gt;.stop_button.config(state&lt;span&gt;=&lt;/span&gt;tk.NORMAL)&lt;/span&gt;
&lt;span id="cb1-38"&gt; &lt;span&gt;self&lt;/span&gt;.listener &lt;span&gt;=&lt;/span&gt; Listener(on_press&lt;span&gt;=&lt;/span&gt;&lt;span&gt;self&lt;/span&gt;.on_press)&lt;/span&gt;
&lt;span id="cb1-39"&gt; &lt;span&gt;self&lt;/span&gt;.listener.start()&lt;/span&gt;
&lt;span id="cb1-40"&gt;&lt;/span&gt;
&lt;span id="cb1-41"&gt; &lt;span&gt;def&lt;/span&gt; stop_counting(&lt;span&gt;self&lt;/span&gt;):&lt;/span&gt;
&lt;span id="cb1-42"&gt; &lt;span&gt;self&lt;/span&gt;.is_counting &lt;span&gt;=&lt;/span&gt; &lt;span&gt;False&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-43"&gt; &lt;span&gt;self&lt;/span&gt;.start_button.config(state&lt;span&gt;=&lt;/span&gt;tk.NORMAL)&lt;/span&gt;
&lt;span id="cb1-44"&gt; &lt;span&gt;self&lt;/span&gt;.stop_button.config(state&lt;span&gt;=&lt;/span&gt;tk.DISABLED)&lt;/span&gt;
&lt;span id="cb1-45"&gt; &lt;span&gt;if&lt;/span&gt; &lt;span&gt;self&lt;/span&gt;.listener:&lt;/span&gt;
&lt;span id="cb1-46"&gt; &lt;span&gt;self&lt;/span&gt;.listener.stop()&lt;/span&gt;
&lt;span id="cb1-47"&gt;&lt;/span&gt;
&lt;span id="cb1-48"&gt; &lt;span&gt;def&lt;/span&gt; on_press(&lt;span&gt;self&lt;/span&gt;, key):&lt;/span&gt;
&lt;span id="cb1-49"&gt; &lt;span&gt;if&lt;/span&gt; &lt;span&gt;self&lt;/span&gt;.is_counting:&lt;/span&gt;
&lt;span id="cb1-50"&gt; &lt;span&gt;self&lt;/span&gt;.count &lt;span&gt;+=&lt;/span&gt; &lt;span&gt;1&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-51"&gt; &lt;span&gt;self&lt;/span&gt;.label.config(text&lt;span&gt;=&lt;/span&gt;&lt;span&gt;f"Count: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;&lt;span&gt;self&lt;/span&gt;&lt;span&gt;.&lt;/span&gt;count&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-52"&gt;&lt;/span&gt;
&lt;span id="cb1-53"&gt; &lt;span&gt;def&lt;/span&gt; save_count(&lt;span&gt;self&lt;/span&gt;):&lt;/span&gt;
&lt;span id="cb1-54"&gt; today &lt;span&gt;=&lt;/span&gt; date.today().isoformat()&lt;/span&gt;
&lt;span id="cb1-55"&gt; data &lt;span&gt;=&lt;/span&gt; [today, &lt;span&gt;self&lt;/span&gt;.count]&lt;/span&gt;
&lt;span id="cb1-56"&gt; &lt;/span&gt;
&lt;span id="cb1-57"&gt; file_exists &lt;span&gt;=&lt;/span&gt; os.path.isfile(&lt;span&gt;self&lt;/span&gt;.csv_file)&lt;/span&gt;
&lt;span id="cb1-58"&gt; &lt;/span&gt;
&lt;span id="cb1-59"&gt; &lt;span&gt;with&lt;/span&gt; &lt;span&gt;open&lt;/span&gt;(&lt;span&gt;self&lt;/span&gt;.csv_file, &lt;span&gt;'a'&lt;/span&gt;, newline&lt;span&gt;=&lt;/span&gt;&lt;span&gt;''&lt;/span&gt;) &lt;span&gt;as&lt;/span&gt; f:&lt;/span&gt;
&lt;span id="cb1-60"&gt; writer &lt;span&gt;=&lt;/span&gt; csv.writer(f)&lt;/span&gt;
&lt;span id="cb1-61"&gt; &lt;span&gt;if&lt;/span&gt; &lt;span&gt;not&lt;/span&gt; file_exists:&lt;/span&gt;
&lt;span id="cb1-62"&gt; writer.writerow([&lt;span&gt;"Date"&lt;/span&gt;, &lt;span&gt;"Count"&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb1-63"&gt; writer.writerow(data)&lt;/span&gt;
&lt;span id="cb1-64"&gt; &lt;/span&gt;
&lt;span id="cb1-65"&gt; &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"Data saved: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;data&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb1-66"&gt;&lt;/span&gt;
&lt;span id="cb1-67"&gt;root &lt;span&gt;=&lt;/span&gt; tk.Tk()&lt;/span&gt;
&lt;span id="cb1-68"&gt;app &lt;span&gt;=&lt;/span&gt; TypingCounter(root)&lt;/span&gt;
&lt;span id="cb1-69"&gt;root.mainloop()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This code provides the following features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start button: begins counting keystrokes.&lt;/li&gt;
&lt;li&gt;Stop button: stops counting.&lt;/li&gt;
&lt;li&gt;Save button: saves the current date and keystroke count to a CSV file.&lt;/li&gt;
&lt;li&gt;Quit button: exits the program.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The CSV file (‘typing_count.csv’) is created in the folder containing the script; if the file already exists, new data is appended. If it does not exist, it is created with a header row. The program starts counting keystrokes when you press Start and stops when you press Stop. Pressing Save writes the current date and count to the CSV file.&lt;/p&gt;

&lt;h2&gt;
&lt;span&gt;1.2&lt;/span&gt; A Heart-Rate-Based Formula for Calorie Expenditure&lt;/h2&gt;

&lt;p&gt;To estimate the calories burned while typing, I used heart rate as the basis. A commonly used formula is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%2520%28kcal%2Fmin%29%257D%2520%3D%2520%255Cfrac%257B%255Ctext%257B%25EC%258B%25AC%25EB%25B0%2595%25EC%2588%2598%2520%28bpm%29%257D%2520%255Ctimes%2520%255Ctext%257B%25EC%25B2%25B4%25EC%25A4%2591%2520%28kg%29%257D%2520%255Ctimes%25200.6309%257D%257B1000%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%2520%28kcal%2Fmin%29%257D%2520%3D%2520%255Cfrac%257B%255Ctext%257B%25EC%258B%25AC%25EB%25B0%2595%25EC%2588%2598%2520%28bpm%29%257D%2520%255Ctimes%2520%255Ctext%257B%25EC%25B2%25B4%25EC%25A4%2591%2520%28kg%29%257D%2520%255Ctimes%25200.6309%257D%257B1000%257D%250A" width="535" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A:&lt;/strong&gt; calories burned at the resting heart rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;B:&lt;/strong&gt; calories burned at the heart rate while typing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
&lt;span&gt;1.3&lt;/span&gt; Measuring Heart Rate with an Apple Watch&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Resting heart rate: &lt;strong&gt;88 bpm&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Heart rate while typing: &lt;strong&gt;97 bpm&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
&lt;span&gt;1.4&lt;/span&gt; Calories Burned by One Minute of Typing&lt;/h2&gt;

&lt;p&gt;Assume a body weight of 65 kg.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A: calories burned at the resting heart rate &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%257D%2520%3D%2520%255Cfrac%257B88%2520%255Ctimes%252065%2520%255Ctimes%25200.6309%257D%257B1000%257D%2520%255Capprox%25203.6087%2520%255Ctext%257B%2520Kcal%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%257D%2520%3D%2520%255Cfrac%257B88%2520%255Ctimes%252065%2520%255Ctimes%25200.6309%257D%257B1000%257D%2520%255Capprox%25203.6087%2520%255Ctext%257B%2520Kcal%257D%250A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;B: calories burned while typing &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%257D%2520%3D%2520%255Cfrac%257B97%2520%255Ctimes%252065%2520%255Ctimes%25200.6309%257D%257B1000%257D%2520%255Capprox%25203.9778%2520%255Ctext%257B%2520Kcal%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%2586%258C%25EB%25AA%25A8%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%257D%2520%3D%2520%255Cfrac%257B97%2520%255Ctimes%252065%2520%255Ctimes%25200.6309%257D%257B1000%257D%2520%255Capprox%25203.9778%2520%255Ctext%257B%2520Kcal%257D%250A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;B − A: the difference over roughly one minute is &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A3.9778%2520-%25203.6087%2520%255Capprox%25200.3691%2520%255Ctext%257B%2520Kcal%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A3.9778%2520-%25203.6087%2520%255Capprox%25200.3691%2520%255Ctext%257B%2520Kcal%257D%250A" width="238" height="13"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dividing the per-minute difference by 222 keystrokes (a typing speed of 222 keystrokes per minute), a single keystroke burns &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Cfrac%257B0.3691%257D%257B222%257D%2520%255Capprox%25200.0016%2520%255Ctext%257B%2520Kcal%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Cfrac%257B0.3691%257D%257B222%257D%2520%255Capprox%25200.0016%2520%255Ctext%257B%2520Kcal%257D%250A" width="168" height="37"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a single keystroke burns about &lt;strong&gt;0.0016 Kcal&lt;/strong&gt;.&lt;/p&gt;
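&lt;p&gt;The calculation above can be reproduced in a few lines of Python. The function name is my own; the formula constant 0.6309, the heart rates, the 65 kg weight, and the 222 keystrokes per minute all come from this section.&lt;/p&gt;

```python
def kcal_per_min(bpm, weight_kg):
    # heart-rate-based estimate: bpm * weight (kg) * 0.6309 / 1000
    return bpm * weight_kg * 0.6309 / 1000

resting = kcal_per_min(88, 65)    # kcal/min at the resting heart rate
typing = kcal_per_min(97, 65)     # kcal/min at the typing heart rate
per_minute = typing - resting     # extra kcal burned by one minute of typing
per_keystroke = per_minute / 222  # divided by 222 keystrokes per minute

print(round(resting, 4), round(typing, 4), round(per_minute, 4))
```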



&lt;h1&gt;
&lt;span&gt;2&lt;/span&gt; Measuring How Much I Type in a Day&lt;/h1&gt;

&lt;p&gt;Using the program above, I measured my daily keystroke counts over the year-end period and saved them to a CSV file. The code below loads and visualizes the results.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb2-1"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb2-2"&gt;&lt;span&gt;import&lt;/span&gt; pandas &lt;span&gt;as&lt;/span&gt; pd&lt;/span&gt;
&lt;span id="cb2-3"&gt;&lt;span&gt;import&lt;/span&gt; seaborn &lt;span&gt;as&lt;/span&gt; sns&lt;/span&gt;
&lt;span id="cb2-4"&gt;&lt;/span&gt;
&lt;span id="cb2-5"&gt;df &lt;span&gt;=&lt;/span&gt; pd.read_csv(&lt;span&gt;"../typing_count.csv"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb2-6"&gt;&lt;span&gt;# 날짜 열을 datetime 형식으로 변환&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-7"&gt;df[&lt;span&gt;'Date'&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; pd.to_datetime(df[&lt;span&gt;'Date'&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb2-8"&gt;df.head()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Date&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;2024-12-12&lt;/td&gt;
&lt;td&gt;14246&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2024-12-13&lt;/td&gt;
&lt;td&gt;19144&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2024-12-14&lt;/td&gt;
&lt;td&gt;18096&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;2024-12-15&lt;/td&gt;
&lt;td&gt;24999&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;2024-12-16&lt;/td&gt;
&lt;td&gt;23141&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;br&gt;



&lt;h2&gt;
&lt;span&gt;2.1&lt;/span&gt; Visualizing the Average Keystroke Count&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb3-1"&gt;&lt;span&gt;# 전체 평균값 계산&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-2"&gt;overall_mean &lt;span&gt;=&lt;/span&gt; df[&lt;span&gt;'Count'&lt;/span&gt;].mean()&lt;/span&gt;
&lt;span id="cb3-3"&gt;&lt;/span&gt;
&lt;span id="cb3-4"&gt;&lt;span&gt;# 서브플롯 생성 (비율 조정)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-5"&gt;fig &lt;span&gt;=&lt;/span&gt; plt.figure(figsize&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;8&lt;/span&gt;, &lt;span&gt;3&lt;/span&gt;))&lt;/span&gt;
&lt;span id="cb3-6"&gt;gs &lt;span&gt;=&lt;/span&gt; fig.add_gridspec(&lt;span&gt;1&lt;/span&gt;, &lt;span&gt;5&lt;/span&gt;) &lt;span&gt;# 4:1 비율로 그리드 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-7"&gt;&lt;/span&gt;
&lt;span id="cb3-8"&gt;&lt;span&gt;# 첫 번째 플롯: 선 그래프 (4칸 차지)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-9"&gt;ax1 &lt;span&gt;=&lt;/span&gt; fig.add_subplot(gs[&lt;span&gt;0&lt;/span&gt;, :&lt;span&gt;4&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb3-10"&gt;ax1.plot(df[&lt;span&gt;'Date'&lt;/span&gt;], df[&lt;span&gt;'Count'&lt;/span&gt;], marker&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'o'&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-11"&gt;ax1.spines[&lt;span&gt;'top'&lt;/span&gt;].set_visible(&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-12"&gt;ax1.spines[&lt;span&gt;'right'&lt;/span&gt;].set_visible(&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-13"&gt;ax1.set_xlabel(&lt;span&gt;''&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-14"&gt;ax1.set_ylabel(&lt;span&gt;'Count'&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-15"&gt;ax1.grid(&lt;span&gt;True&lt;/span&gt;, linestyle&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'--'&lt;/span&gt;, alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.7&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-16"&gt;fig.autofmt_xdate()&lt;/span&gt;
&lt;span id="cb3-17"&gt;ax1.set_ylim(bottom&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-18"&gt;&lt;span&gt;for&lt;/span&gt; i, count &lt;span&gt;in&lt;/span&gt; &lt;span&gt;enumerate&lt;/span&gt;(df[&lt;span&gt;'Count'&lt;/span&gt;]):&lt;/span&gt;
&lt;span id="cb3-19"&gt; ax1.annotate(&lt;span&gt;str&lt;/span&gt;(count), (df[&lt;span&gt;'Date'&lt;/span&gt;][i], count), textcoords&lt;span&gt;=&lt;/span&gt;&lt;span&gt;"offset points"&lt;/span&gt;, xytext&lt;span&gt;=&lt;/span&gt;(&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;7&lt;/span&gt;), ha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'center'&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-20"&gt;&lt;/span&gt;
&lt;span id="cb3-21"&gt;&lt;span&gt;# 두 번째 플롯: 스웜 플롯 (1칸 차지)&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-22"&gt;ax2 &lt;span&gt;=&lt;/span&gt; fig.add_subplot(gs[&lt;span&gt;0&lt;/span&gt;, &lt;span&gt;4&lt;/span&gt;])&lt;/span&gt;
&lt;span id="cb3-23"&gt;sns.swarmplot(x&lt;span&gt;=&lt;/span&gt;[&lt;span&gt;'All Dates'&lt;/span&gt;] &lt;span&gt;*&lt;/span&gt; &lt;span&gt;len&lt;/span&gt;(df), y&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'Count'&lt;/span&gt;, data&lt;span&gt;=&lt;/span&gt;df, ax&lt;span&gt;=&lt;/span&gt;ax2)&lt;/span&gt;
&lt;span id="cb3-24"&gt;ax2.spines[&lt;span&gt;'top'&lt;/span&gt;].set_visible(&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-25"&gt;ax2.spines[&lt;span&gt;'right'&lt;/span&gt;].set_visible(&lt;span&gt;False&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-26"&gt;ax2.axhline(y&lt;span&gt;=&lt;/span&gt;overall_mean, color&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'red'&lt;/span&gt;, linestyle&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'--'&lt;/span&gt;, label&lt;span&gt;=&lt;/span&gt;&lt;span&gt;f'Mean: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;overall_mean&lt;span&gt;:.2f}&lt;/span&gt;&lt;span&gt;'&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-27"&gt;ax2.set_xlabel(&lt;span&gt;''&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-28"&gt;ax2.set_ylabel(&lt;span&gt;'Count'&lt;/span&gt;, fontsize&lt;span&gt;=&lt;/span&gt;&lt;span&gt;12&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-29"&gt;ax2.grid(&lt;span&gt;True&lt;/span&gt;, linestyle&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'--'&lt;/span&gt;, alpha&lt;span&gt;=&lt;/span&gt;&lt;span&gt;0.7&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb3-30"&gt;&lt;/span&gt;
&lt;span id="cb3-31"&gt;&lt;span&gt;# 레이아웃 조정 및 출력&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-32"&gt;plt.tight_layout()&lt;/span&gt;
&lt;span id="cb3-33"&gt;plt.show()&lt;/span&gt;
&lt;span id="cb3-34"&gt;&lt;/span&gt;
&lt;span id="cb3-35"&gt;&lt;span&gt;# 평균값 출력&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb3-36"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"Overall Mean: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;overall_mean&lt;span&gt;:.2f}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;




&lt;p&gt;&lt;a href="typecount_files/figure-html/cell-3-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2Ftypecount_files%2Ffigure-html%2Fcell-3-output-1.png" width="789" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Overall Mean: 18511.13&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From the result above, we can calculate the total calories burned by typing in a day.&lt;/p&gt;



&lt;h1&gt;
&lt;span&gt;3&lt;/span&gt; Total Daily Calories Burned by Typing&lt;/h1&gt;

&lt;p&gt;Since the average number of keystrokes per day is &lt;strong&gt;18,511&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%259D%25BC%25EC%259D%25BC%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%2520%25EC%2586%258C%25EB%25B9%2584%25EB%259F%2589%257D%2520%3D%252018%2C511%2520%255Ctimes%25200.0016%2520%255Capprox%252029.6176%2520%255Ctext%257B%2520Kcal%257D%250A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flatex.codecogs.com%2Fpng.latex%3F%250A%255Ctext%257B%25EC%259D%25BC%25EC%259D%25BC%2520%25EC%25B9%25BC%25EB%25A1%259C%25EB%25A6%25AC%2520%25EC%2586%258C%25EB%25B9%2584%25EB%259F%2589%257D%2520%3D%252018%2C511%2520%255Ctimes%25200.0016%2520%255Capprox%252029.6176%2520%255Ctext%257B%2520Kcal%257D%250A" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;typing burns about &lt;strong&gt;29.6 Kcal&lt;/strong&gt; per day.&lt;/p&gt;
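&lt;p&gt;As a quick sanity check of the arithmetic, the daily total follows directly from the per-keystroke figure measured earlier:&lt;/p&gt;

```python
per_keystroke = 0.0016    # kcal per keystroke, from the earlier calculation
daily_keystrokes = 18511  # average keystrokes per day, from the measurement
daily_kcal = daily_keystrokes * per_keystroke
print(round(daily_kcal, 1))  # → 29.6
```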

&lt;h2&gt;
&lt;span&gt;3.1&lt;/span&gt; Conclusion&lt;/h2&gt;

&lt;p&gt;29.6 Kcal is higher than I expected, but it is said to be roughly the energy in a single piece of chocolate. Burning that many calories through exercise would reportedly take about 10 minutes of walking. Clearly I should get up and walk instead of just sitting and typing.&lt;/p&gt;



&lt;h1&gt;
&lt;span&gt;4&lt;/span&gt; Reference&lt;/h1&gt;

&lt;p&gt;[^1] : https://qiita.com/mercy-333/items/cf2e0f0b040926184004&lt;/p&gt;



</description>
      <category>python</category>
      <category>visualization</category>
      <category>health</category>
      <category>fitness</category>
    </item>
    <item>
      <title>Calplot: Beautiful Calendar Heatmaps in Python</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Sat, 18 Jan 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/ehottl/calplot-paisseoneuro-mandeuneun-meosjin-kaelrindeo-hiteumaeb-1628</link>
      <guid>https://dev.to/ehottl/calplot-paisseoneuro-mandeuneun-meosjin-kaelrindeo-hiteumaeb-1628</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/tomkwok/calplot" rel="noopener noreferrer"&gt;calplot&lt;/a&gt;은 파이썬에서 시계열 데이터를 시각적으로 표현할 수 있는 라이브러리입니다. 이 라이브러리를 사용하면 GitHub의 기여도 그래프와 유사한 캘린더 형태의 히트맵을 쉽게 만들 수 있습니다. 이번 포스팅에서는 &lt;a href="https://meteostat.net" rel="noopener noreferrer"&gt;Meteostat&lt;/a&gt; 라이브러리를 사용하여 날씨 데이터를 가져오고 캘린더 형태로 시각화하는 방법을 소개합니다. 라이브러리를 통해 연도별 데이터를 직관적으로 표현합니다. 특히, 평균 기온과 일교차를 시각화하는 과정을 다룹니다.&lt;/p&gt;

&lt;h1&gt;
&lt;span&gt;1&lt;/span&gt; Technologies and Libraries Used&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Meteostat: a library for conveniently fetching weather data, using observations from weather stations in a given area.&lt;/li&gt;
&lt;li&gt;Calplot: a powerful tool for visualizing data in calendar form.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
&lt;span&gt;2&lt;/span&gt; Initial Setup&lt;/h1&gt;

&lt;p&gt;First we define the required constants and fonts. Fetching Seoul's weather requires its GPS coordinates as constants, and setting a Korean-capable font keeps Hangul labels from rendering as broken glyphs in the plots. We will load five years of weather data, from 2020 through 2024.&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;&lt;span id="cb1-1"&gt;&lt;span&gt;from&lt;/span&gt; datetime &lt;span&gt;import&lt;/span&gt; datetime&lt;/span&gt;
&lt;span id="cb1-2"&gt;&lt;/span&gt;
&lt;span id="cb1-3"&gt;&lt;span&gt;import&lt;/span&gt; calplot&lt;/span&gt;
&lt;span id="cb1-4"&gt;&lt;span&gt;import&lt;/span&gt; matplotlib.pyplot &lt;span&gt;as&lt;/span&gt; plt&lt;/span&gt;
&lt;span id="cb1-5"&gt;&lt;span&gt;import&lt;/span&gt; pandas &lt;span&gt;as&lt;/span&gt; pd&lt;/span&gt;
&lt;span id="cb1-6"&gt;&lt;span&gt;from&lt;/span&gt; meteostat &lt;span&gt;import&lt;/span&gt; Daily, Point, Stations&lt;/span&gt;
&lt;span id="cb1-7"&gt;&lt;/span&gt;
&lt;span id="cb1-8"&gt;&lt;span&gt;# Constants&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-9"&gt;NAME: &lt;span&gt;str&lt;/span&gt; &lt;span&gt;=&lt;/span&gt; &lt;span&gt;"서울"&lt;/span&gt; &lt;span&gt;# 지역 이름&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-10"&gt;GPS: &lt;span&gt;tuple&lt;/span&gt;[&lt;span&gt;float&lt;/span&gt;, &lt;span&gt;float&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; (&lt;span&gt;37.5667&lt;/span&gt;, &lt;span&gt;126.9667&lt;/span&gt;) &lt;span&gt;# GPS 좌표&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-11"&gt;START: datetime &lt;span&gt;=&lt;/span&gt; datetime(&lt;span&gt;2020&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;, &lt;span&gt;1&lt;/span&gt;) &lt;span&gt;# 조회 시작&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-12"&gt;END: datetime &lt;span&gt;=&lt;/span&gt; datetime(&lt;span&gt;2024&lt;/span&gt;, &lt;span&gt;12&lt;/span&gt;, &lt;span&gt;31&lt;/span&gt;) &lt;span&gt;# 조회 끝&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-13"&gt;&lt;/span&gt;
&lt;span id="cb1-14"&gt;&lt;span&gt;# 한글 폰트 설정&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-15"&gt;plt.rcParams[&lt;span&gt;'font.family'&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; &lt;span&gt;'Pretendard Variable'&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb1-16"&gt;plt.rcParams[&lt;span&gt;'axes.unicode_minus'&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; &lt;span&gt;False&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;h1&gt;
&lt;span&gt;3&lt;/span&gt; Fetching and Preparing Weather Station Data&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/meteostat/meteostat-python" rel="noopener noreferrer"&gt;Meteostat&lt;/a&gt; 라이브러리를 사용하여 서울 근처 기상 관측소 데이터를 선택합니다. 날씨 데이터를 가져온 뒤에는 일교차(최고 기온(tmax)과 최저 기온(tmin)의 차이), 눈/비 여부(강수량(prcp) 또는 적설량(snow)이 있는 경우 1, 없는 경우 NaN)에 대한 데이터를 열을 추가해줍니다.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb2-1"&gt;stations: Stations &lt;span&gt;=&lt;/span&gt; Stations()&lt;/span&gt;
&lt;span id="cb2-2"&gt;&lt;/span&gt;
&lt;span id="cb2-3"&gt;&lt;span&gt;# Get nearby weather stations based on latitude and longitude&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-4"&gt;nearby_stations: Stations &lt;span&gt;=&lt;/span&gt; stations.nearby(GPS[&lt;span&gt;0&lt;/span&gt;], GPS[&lt;span&gt;1&lt;/span&gt;]) &lt;span&gt;# GPS 튜플 언패킹&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-5"&gt;&lt;/span&gt;
&lt;span id="cb2-6"&gt;&lt;span&gt;# Fetch the first station's data&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-7"&gt;station_data: pd.DataFrame &lt;span&gt;=&lt;/span&gt; nearby_stations.fetch(&lt;span&gt;1&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb2-8"&gt;&lt;/span&gt;
&lt;span id="cb2-9"&gt;&lt;span&gt;# Print station information&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-10"&gt;&lt;span&gt;print&lt;/span&gt;(&lt;span&gt;f"선택된 관측소: &lt;/span&gt;&lt;span&gt;{&lt;/span&gt;station_data[&lt;span&gt;'name'&lt;/span&gt;]&lt;span&gt;.&lt;/span&gt;values[&lt;span&gt;0&lt;/span&gt;]&lt;span&gt;}&lt;/span&gt;&lt;span&gt;"&lt;/span&gt;)&lt;/span&gt;
&lt;span id="cb2-11"&gt;&lt;/span&gt;
&lt;span id="cb2-12"&gt;&lt;span&gt;# Use the coordinates of the selected station&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-13"&gt;location: Point &lt;span&gt;=&lt;/span&gt; Point(&lt;/span&gt;
&lt;span id="cb2-14"&gt; station_data[&lt;span&gt;'latitude'&lt;/span&gt;].values[&lt;span&gt;0&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb2-15"&gt; station_data[&lt;span&gt;'longitude'&lt;/span&gt;].values[&lt;span&gt;0&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb2-16"&gt;)&lt;/span&gt;
&lt;span id="cb2-17"&gt;&lt;/span&gt;
&lt;span id="cb2-18"&gt;&lt;span&gt;# Fetch weather data&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-19"&gt;weather_data: pd.DataFrame &lt;span&gt;=&lt;/span&gt; Daily(location, start&lt;span&gt;=&lt;/span&gt;START, end&lt;span&gt;=&lt;/span&gt;END).fetch()&lt;/span&gt;
&lt;span id="cb2-20"&gt;&lt;/span&gt;
&lt;span id="cb2-21"&gt;&lt;span&gt;# 일교차 계산&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-22"&gt;weather_data[&lt;span&gt;'diurnal_range'&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; weather_data[&lt;span&gt;'tmax'&lt;/span&gt;] &lt;span&gt;-&lt;/span&gt; weather_data[&lt;span&gt;'tmin'&lt;/span&gt;]&lt;/span&gt;
&lt;span id="cb2-23"&gt;&lt;/span&gt;
&lt;span id="cb2-24"&gt;&lt;span&gt;# 눈이나 비가 온 날은 1로, 오지 않은 날은 NaN으로 표기하는 새로운 열 추가&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-25"&gt;weather_data[&lt;span&gt;'rain_or_snow'&lt;/span&gt;] &lt;span&gt;=&lt;/span&gt; weather_data.&lt;span&gt;apply&lt;/span&gt;(&lt;/span&gt;
&lt;span id="cb2-26"&gt; &lt;span&gt;lambda&lt;/span&gt; row: &lt;span&gt;1&lt;/span&gt; &lt;span&gt;if&lt;/span&gt; (row[&lt;span&gt;'prcp'&lt;/span&gt;] &lt;span&gt;&amp;gt;&lt;/span&gt; &lt;span&gt;0&lt;/span&gt; &lt;span&gt;or&lt;/span&gt; row[&lt;span&gt;'snow'&lt;/span&gt;] &lt;span&gt;&amp;gt;&lt;/span&gt; &lt;span&gt;0&lt;/span&gt;) &lt;span&gt;else&lt;/span&gt; &lt;span&gt;float&lt;/span&gt;(&lt;span&gt;'nan'&lt;/span&gt;), axis&lt;span&gt;=&lt;/span&gt;&lt;span&gt;1&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-27"&gt;)&lt;/span&gt;
&lt;span id="cb2-28"&gt;&lt;/span&gt;
&lt;span id="cb2-29"&gt;&lt;span&gt;# Display the last few rows of the data&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb2-30"&gt;weather_data.tail()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;pre&gt;&lt;code&gt;선택된 관측소: Seoul&lt;/code&gt;&lt;/pre&gt;
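&lt;p&gt;The row-wise apply works, but on larger frames a vectorized expression is usually faster; a minimal sketch with toy data (the column names match the Meteostat frame, the values are made up):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import numpy as np
import pandas as pd

# Toy frame mimicking the prcp/snow columns of the Meteostat data
df = pd.DataFrame({"prcp": [0.0, 0.2, np.nan], "snow": [np.nan, np.nan, 1.0]})

# Vectorized equivalent of the row-wise lambda: .gt() treats NaN as False
wet = df["prcp"].gt(0) | df["snow"].gt(0)
df["rain_or_snow"] = np.where(wet, 1.0, np.nan)
print(df["rain_or_snow"].tolist())  # [nan, 1.0, 1.0]
&lt;/code&gt;&lt;/pre&gt;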

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;tavg&lt;/th&gt;
&lt;th&gt;tmin&lt;/th&gt;
&lt;th&gt;tmax&lt;/th&gt;
&lt;th&gt;prcp&lt;/th&gt;
&lt;th&gt;snow&lt;/th&gt;
&lt;th&gt;wdir&lt;/th&gt;
&lt;th&gt;wspd&lt;/th&gt;
&lt;th&gt;wpgt&lt;/th&gt;
&lt;th&gt;pres&lt;/th&gt;
&lt;th&gt;tsun&lt;/th&gt;
&lt;th&gt;diurnal_range&lt;/th&gt;
&lt;th&gt;rain_or_snow&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;th&gt;time&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2024-12-27&lt;/td&gt;
&lt;td&gt;-2.6&lt;/td&gt;
&lt;td&gt;-5.5&lt;/td&gt;
&lt;td&gt;-0.7&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;283.0&lt;/td&gt;
&lt;td&gt;8.7&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;1026.6&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2024-12-28&lt;/td&gt;
&lt;td&gt;-2.1&lt;/td&gt;
&lt;td&gt;-6.7&lt;/td&gt;
&lt;td&gt;-1.4&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;284.0&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;1024.3&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;5.3&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2024-12-29&lt;/td&gt;
&lt;td&gt;2.6&lt;/td&gt;
&lt;td&gt;-4.2&lt;/td&gt;
&lt;td&gt;4.1&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;70.0&lt;/td&gt;
&lt;td&gt;5.1&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;1024.9&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;8.3&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2024-12-30&lt;/td&gt;
&lt;td&gt;4.8&lt;/td&gt;
&lt;td&gt;1.9&lt;/td&gt;
&lt;td&gt;9.3&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;351.0&lt;/td&gt;
&lt;td&gt;8.6&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;1018.7&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;7.4&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2024-12-31&lt;/td&gt;
&lt;td&gt;0.1&lt;/td&gt;
&lt;td&gt;-1.4&lt;/td&gt;
&lt;td&gt;4.4&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;281.0&lt;/td&gt;
&lt;td&gt;9.9&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;1020.6&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;td&gt;5.8&lt;/td&gt;
&lt;td&gt;NaN&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;br&gt;
&lt;br&gt;


&lt;h1&gt;
&lt;span&gt;4&lt;/span&gt; Visualization&lt;/h1&gt;


&lt;h2&gt;
&lt;span&gt;4.1&lt;/span&gt; Average Temperature Calendar Plot&lt;/h2&gt;
&lt;p&gt;Using calplot, we visualize the yearly average temperature in calendar form, with the colormap set to coolwarm.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb4-1"&gt;&lt;span&gt;# 데이터 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb4-2"&gt;calplot.calplot(weather_data[&lt;span&gt;'tavg'&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb4-3"&gt; cmap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'coolwarm'&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-4"&gt; yearascending&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-5"&gt; yearlabel_kws&lt;span&gt;=&lt;/span&gt;{&lt;span&gt;'fontsize'&lt;/span&gt;: &lt;span&gt;16&lt;/span&gt;},&lt;/span&gt;
&lt;span id="cb4-6"&gt; suptitle&lt;span&gt;=&lt;/span&gt;&lt;span&gt;f'&lt;/span&gt;&lt;span&gt;{&lt;/span&gt;NAME&lt;span&gt;}&lt;/span&gt;&lt;span&gt; 평균 기온'&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb4-7"&gt; suptitle_kws&lt;span&gt;=&lt;/span&gt;{&lt;span&gt;'fontsize'&lt;/span&gt;: &lt;span&gt;20&lt;/span&gt;, &lt;span&gt;'y'&lt;/span&gt;: &lt;span&gt;1.05&lt;/span&gt;})&lt;/span&gt;
&lt;span id="cb4-8"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;




&lt;p&gt;&lt;a href="Python_calplot_files/figure-html/cell-4-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FPython_calplot_files%2Ffigure-html%2Fcell-4-output-1.png" width="800" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The plot shows that January 2021 was unusually cold on average, while December 2024 was noticeably warm. There also appears to be a trend of the summers getting hotter.&lt;/p&gt;

&lt;h2&gt;
&lt;span&gt;4.2&lt;/span&gt; Diurnal Range Calendar Plot&lt;/h2&gt;

&lt;p&gt;Diurnal range is commonly said to be largest in spring and autumn; let's check whether that actually holds. We plot the diurnal-range data as a calendar plot, using the YlGn colormap to show the variation in shades of green.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&lt;span id="cb5-1"&gt;&lt;span&gt;# 데이터 시각화&lt;/span&gt;&lt;/span&gt;
&lt;span id="cb5-2"&gt;calplot.calplot(weather_data[&lt;span&gt;'diurnal_range'&lt;/span&gt;],&lt;/span&gt;
&lt;span id="cb5-3"&gt; cmap&lt;span&gt;=&lt;/span&gt;&lt;span&gt;'YlGn'&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb5-4"&gt; yearascending&lt;span&gt;=&lt;/span&gt;&lt;span&gt;True&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb5-5"&gt; yearlabel_kws&lt;span&gt;=&lt;/span&gt;{&lt;span&gt;'fontsize'&lt;/span&gt;: &lt;span&gt;16&lt;/span&gt;},&lt;/span&gt;
&lt;span id="cb5-6"&gt; suptitle&lt;span&gt;=&lt;/span&gt;&lt;span&gt;f'&lt;/span&gt;&lt;span&gt;{&lt;/span&gt;NAME&lt;span&gt;}&lt;/span&gt;&lt;span&gt; 일교차'&lt;/span&gt;,&lt;/span&gt;
&lt;span id="cb5-7"&gt; suptitle_kws&lt;span&gt;=&lt;/span&gt;{&lt;span&gt;'fontsize'&lt;/span&gt;: &lt;span&gt;20&lt;/span&gt;, &lt;span&gt;'y'&lt;/span&gt;: &lt;span&gt;1.05&lt;/span&gt;})&lt;/span&gt;
&lt;span id="cb5-8"&gt;plt.show()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;



&lt;p&gt;&lt;a href="Python_calplot_files/figure-html/cell-5-output-1.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftomorrow-lab.github.io%2Fposts%2Fipynb%2FPython_calplot_files%2Ffigure-html%2Fcell-5-output-1.png" width="800" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apart from the hot months (July to September), the diurnal range shows no obvious pattern. It is clear, though, that spring (March to April) has a distinctly larger diurnal range than autumn.&lt;/p&gt;



&lt;h1&gt;
&lt;span&gt;5&lt;/span&gt; Closing Thoughts&lt;/h1&gt;

&lt;p&gt;This post showed how to use calplot to visualize Seoul's weather data effectively. Beyond weather, calplot works well for any time-series data, such as health and fitness logs, productivity and task-management records, or environmental monitoring, making patterns and trends easy to spot at a glance and thereby supporting data analysis and decision-making. Try calplot in your own field and see what insights you can uncover.&lt;/p&gt;
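&lt;p&gt;As a starting point for such data, any pandas Series with a DatetimeIndex is valid calplot input; a minimal sketch with synthetic step-count data (all names and values here are made up for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import numpy as np
import pandas as pd

# Synthetic daily step counts for one year (made-up values)
rng = np.random.default_rng(42)
days = pd.date_range("2024-01-01", "2024-12-31", freq="D")
steps = pd.Series(rng.integers(2_000, 15_000, len(days)), index=days)

try:
    import calplot
    # Same call pattern as the weather plots above
    calplot.calplot(steps, cmap="YlGn", suptitle="Daily steps (synthetic)")
except ImportError:
    pass  # calplot not installed; steps is still a valid calplot input
&lt;/code&gt;&lt;/pre&gt;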



</description>
      <category>python</category>
      <category>visualization</category>
      <category>calplot</category>
    </item>
    <item>
      <title>hello world</title>
      <dc:creator>RabbitQ</dc:creator>
      <pubDate>Sun, 19 Aug 2018 12:26:42 +0000</pubDate>
      <link>https://dev.to/ehottl/hello-world-3d8p</link>
      <guid>https://dev.to/ehottl/hello-world-3d8p</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p2crtvnm8hhtiqo2xfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8p2crtvnm8hhtiqo2xfy.png" width="370" height="262"&gt;&lt;/a&gt;&lt;br&gt;
This is the test post.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
