Super Kai (Kazuya Ito)

Places365 in PyTorch

*My post explains Places365.

Places365() can load the Places365 dataset as shown below:

*Memos:

  • The 1st argument is root(Required-Type:str or pathlib.Path). *An absolute or relative path is possible.
  • The 2nd argument is split(Optional-Default:"train-standard"-Type:str). *"train-standard"(1,803,460 images), "train-challenge"(8,026,628 images) or "val"(36,500 images) can be set to it. "test"(328,500 images) isn't supported, so I requested the feature on GitHub.
  • The 3rd argument is small(Optional-Default:False-Type:bool). *If it's True, the small 256x256 images are used; if it's False, the large high-resolution images are used.
  • The 4th argument is download(Optional-Default:False-Type:bool): *Memos:
    • If it's True, the dataset is downloaded from the internet and extracted (untarred) to root.
    • If it's True and the dataset is already downloaded, it's extracted.
    • If it's True and the dataset is already downloaded and extracted, an error occurs because the extracted folders already exist. *Deleting the extracted folders avoids the error.
    • It should be False if the dataset is already downloaded and extracted, to avoid the error. *A first-run sketch with download=True is shown after this list.
    • From here:
      • for split="train-standard" and small=False, you can manually download filelist_places365-standard.tar and train_large_places365standard.tar, then extract them to data/ and data/data_large_standard/ respectively.
      • for split="train-standard" and small=True, you can manually download filelist_places365-standard.tar and train_256_places365standard.tar, then extract them to data/ and data/data_256_standard/ respectively.
      • for split="train-challenge" and small=False, you can manually download filelist_places365-challenge.tar and train_large_places365challenge.tar, then extract them to data/ and data/data_large/ respectively.
      • for split="train-challenge" and small=True, you can manually download filelist_places365-challenge.tar and train_256_places365challenge.tar, then extract them to data/ and data/data_256_challenge/ respectively.
      • for split="val" and small=False, you can manually download filelist_places365-standard.tar and val_large.tar, then extract them to data/ and data/val_large/ respectively.
      • for split="val" and small=True, you can manually download filelist_places365-standard.tar and val_256.tar, then extract them to data/ and data/val_256/ respectively.
  • The 5th argument is transform(Optional-Default:None-Type:callable).
  • The 6th argument is target_transform(Optional-Default:None-Type:callable).
  • The 7th argument is loader(Optional-Default:torchvision.datasets.folder.default_loader-Type:callable).
  • About the labels for the "train-standard" image indices: airfield(0) is 0~4999, airplane_cabin(1) is 5000~9999, airport_terminal(2) is 10000~14999, alcove(3) is 15000~19999, alley(4) is 20000~24999, amphitheater(5) is 25000~29999, amusement_arcade(6) is 30000~34999, amusement_park(7) is 35000~39999, apartment_building/outdoor(8) is 40000~44999, aquarium(9) is 45000~49999, etc. *These boundaries can be checked with the small sketch in the code below.
  • About the labels for the "train-challenge" image indices: airfield(0) is 0~38566, airplane_cabin(1) is 38567~47890, airport_terminal(2) is 47891~74901, alcove(3) is 74902~98482, alley(4) is 98483~137662, amphitheater(5) is 137663~150034, amusement_arcade(6) is 150035~161051, amusement_park(7) is 161052~201051, apartment_building/outdoor(8) is 201052~227872, aquarium(9) is 227873~267872, etc.
from torchvision.datasets import Places365
from torchvision.datasets.folder import default_loader

trainstd_large_data = Places365(
    root="data"
)

trainstd_large_data = Places365(
    root="data",
    split="train-standard",
    small=False,
    download=False,
    transform=None,
    target_transform=None,
    loader=default_loader
)
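
# `transform` and `target_transform` accept callables. A minimal
# illustration converting each PIL image to a tensor with
# torchvision's ToTensor:
from torchvision.transforms import ToTensor

trainstd_large_tensor_data = Places365(
    root="data",
    split="train-standard",
    transform=ToTensor()
)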

trainstd_small_data = Places365(
    root="data",
    split="train-standard",
    small=True
)

trainchal_large_data = Places365(
    root="data",
    split="train-challenge",
    small=False
)

trainchal_small_data = Places365(
    root="data",
    split="train-challenge",
    small=True
)

val_large_data = Places365(
    root="data",
    split="val",
    small=False
)

val_small_data = Places365(
    root="data",
    split="val",
    small=True
)

len(trainstd_large_data), len(trainstd_small_data)
# (1803460, 1803460)

len(trainchal_large_data), len(trainchal_small_data)
# (8026628, 8026628)

len(val_large_data), len(val_small_data)
# (36500, 36500)

trainstd_large_data
# Dataset Places365
#     Number of datapoints: 1803460
#     Root location: data
#     Split: train-standard
#     Small: False

trainstd_large_data.root
# 'data'

trainstd_large_data.split
# 'train-standard'

trainstd_large_data.small
# False

trainstd_large_data.download_devkit
# <bound method Places365.download_devkit of Dataset Places365
#     Number of datapoints: 1803460
#     Root location: data
#     Split: train-standard
#     Small: False>

trainstd_large_data.download_images
# <bound method Places365.download_images of Dataset Places365
#     Number of datapoints: 1803460
#     Root location: data
#     Split: train-standard
#     Small: False>

print(trainstd_large_data.transform)
# None

print(trainstd_large_data.target_transform)
# None

trainstd_large_data.loader
# <function torchvision.datasets.folder.default_loader(path: str) -> Any>
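
# A custom loader can replace default_loader, e.g. forcing RGB with
# PIL (a hypothetical sketch; `my_loader` is not part of torchvision):
from PIL import Image

def my_loader(path):
    return Image.open(path).convert("RGB")

my_data = Places365(root="data", loader=my_loader)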

len(trainstd_large_data.classes), trainstd_large_data.classes
# (365,
#  ['/a/airfield', '/a/airplane_cabin', '/a/airport_terminal',
#   '/a/alcove', '/a/alley', '/a/amphitheater', '/a/amusement_arcade',
#   '/a/amusement_park', '/a/apartment_building/outdoor',
#   '/a/aquarium', '/a/aqueduct', '/a/arcade', '/a/arch',
#   '/a/archaelogical_excavation', ..., '/y/youth_hostel', '/z/zen_garden'])
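
# `class_to_idx` maps each class name to its label (the standard
# torchvision dataset attribute):
trainstd_large_data.class_to_idx['/a/airfield']
# 0

trainstd_large_data.class_to_idx['/a/aquarium']
# 9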

trainstd_large_data[0]
# (<PIL.Image.Image image mode=RGB size=683x512>, 0)

trainstd_large_data[1]
# (<PIL.Image.Image image mode=RGB size=768x512>, 0)

trainstd_large_data[2]
# (<PIL.Image.Image image mode=RGB size=718x512>, 0)

trainstd_large_data[5000]
# (<PIL.Image.Image image mode=RGB size=512x683>, 1)

trainstd_large_data[10000]
# (<PIL.Image.Image image mode=RGB size=683x512>, 2)
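
# The per-class index ranges listed in the memos can be verified with
# `targets` (the list of all labels) and bisect, e.g. for
# "train-standard" the first indices of class 1 and class 2:
import bisect

bisect.bisect_left(trainstd_large_data.targets, 1)
# 5000

bisect.bisect_left(trainstd_large_data.targets, 2)
# 10000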

trainstd_small_data[0]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainstd_small_data[1]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainstd_small_data[2]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainstd_small_data[5000]
# (<PIL.Image.Image image mode=RGB size=256x256>, 1)

trainstd_small_data[10000]
# (<PIL.Image.Image image mode=RGB size=256x256>, 2)

trainchal_large_data[0]
# (<PIL.Image.Image image mode=RGB size=683x512>, 0)

trainchal_large_data[1]
# (<PIL.Image.Image image mode=RGB size=768x512>, 0)

trainchal_large_data[2]
# (<PIL.Image.Image image mode=RGB size=718x512>, 0)

trainchal_large_data[38567]
# (<PIL.Image.Image image mode=RGB size=512x683>, 1)

trainchal_large_data[47891]
# (<PIL.Image.Image image mode=RGB size=683x512>, 2)

trainchal_small_data[0]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainchal_small_data[1]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainchal_small_data[2]
# (<PIL.Image.Image image mode=RGB size=256x256>, 0)

trainchal_small_data[38567]
# (<PIL.Image.Image image mode=RGB size=256x256>, 1)

trainchal_small_data[47891]
# (<PIL.Image.Image image mode=RGB size=256x256>, 2)

val_large_data[0]
# (<PIL.Image.Image image mode=RGB size=512x772>, 165)

val_large_data[1]
# (<PIL.Image.Image image mode=RGB size=600x493>, 358)

val_large_data[2]
# (<PIL.Image.Image image mode=RGB size=763x512>, 93)

val_large_data[3]
# (<PIL.Image.Image image mode=RGB size=827x512>, 164)

val_large_data[4]
# (<PIL.Image.Image image mode=RGB size=772x512>, 289)

val_small_data[0]
# (<PIL.Image.Image image mode=RGB size=256x256>, 165)

val_small_data[1]
# (<PIL.Image.Image image mode=RGB size=256x256>, 358)

val_small_data[2]
# (<PIL.Image.Image image mode=RGB size=256x256>, 93)

val_small_data[3]
# (<PIL.Image.Image image mode=RGB size=256x256>, 164)

val_small_data[4]
# (<PIL.Image.Image image mode=RGB size=256x256>, 289)

import matplotlib.pyplot as plt

def show_images(data, ims, main_title=None):
    # Show the images at the given indices in a 2x5 grid,
    # using each label as the subplot title.
    plt.figure(figsize=(12, 6))
    plt.suptitle(t=main_title, y=1.0, fontsize=14)
    for i, j in enumerate(iterable=ims, start=1):
        plt.subplot(2, 5, i)
        im, lab = data[j]
        plt.imshow(X=im)
        plt.title(label=lab)
    plt.tight_layout(h_pad=3.0)
    plt.show()

trainstd_ims = (0, 1, 2, 5000, 10000, 15000, 20000, 25000, 30000, 35000)
trainchal_ims = (0, 1, 2, 38567, 47891, 74902, 98483, 137663, 150035, 161052)
val_ims = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

show_images(data=trainstd_large_data, ims=trainstd_ims,
            main_title="trainstd_large_data")
show_images(data=trainstd_small_data, ims=trainstd_ims,
            main_title="trainstd_small_data")
show_images(data=trainchal_large_data, ims=trainchal_ims,
            main_title="trainchal_large_data")
show_images(data=trainchal_small_data, ims=trainchal_ims,
            main_title="trainchal_small_data")
show_images(data=val_large_data, ims=val_ims,
            main_title="val_large_data")
show_images(data=val_small_data, ims=val_ims,
            main_title="val_small_data")

[Output: six 2x5 image grids titled trainstd_large_data, trainstd_small_data, trainchal_large_data, trainchal_small_data, val_large_data and val_small_data]
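
To use the dataset for training, the usual next step is to wrap it in a DataLoader with a tensor transform (a minimal sketch with the standard PyTorch APIs, not part of the output above; Resize keeps batch shapes uniform since the large images vary in size):

from torch.utils.data import DataLoader
from torchvision.datasets import Places365
from torchvision.transforms import Compose, Resize, ToTensor

val_loader = DataLoader(
    dataset=Places365(
        root="data",
        split="val",
        small=False,
        transform=Compose([Resize(size=(256, 256)), ToTensor()])
    ),
    batch_size=32,
    shuffle=False
)

images, labels = next(iter(val_loader))
print(images.shape)
# torch.Size([32, 3, 256, 256])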
