Image Classification in Practice: Using VGG16 to Classify Plant Seedlings (PyTorch)


Contents

Abstract

Creating the Project

Importing the Required Libraries

Setting Global Parameters

Image Preprocessing

Loading the Data

Setting Up the Model

Setting Up Training and Validation

Testing

Complete Code

Abstract

This time we use the classic image classification model VGG16 to classify plant seedlings. Dataset link: https://pan.baidu.com/s/1JIczDc7VP-PMBnF71302dA (extraction code: rqne). There are 12 classes in total. A sample image is shown below.

Most of the images have a bit depth of 24, but a few are 32-bit, so a forced conversion is needed when the images are loaded. One reminder here: when you get a dataset, don't jump straight into the algorithm. Browse the dataset first to get an initial sense of what the images look like, how many there are, and how hard the recognition task is.
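For example, a 32-bit PNG carries an extra alpha channel; PIL's convert('RGB') drops it so that every sample ends up with three channels, which is what the dataset class later in this article does when loading images. A minimal sketch (the file path here is only a placeholder):

from PIL import Image

img = Image.open('data/train/Maize/example.png')  # placeholder path for illustration
print(img.mode)           # 'RGBA' for a 32-bit image, 'RGB' for a 24-bit one
img = img.convert('RGB')  # force 3-channel RGB regardless of the source bit depth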

The model is VGG; for a detailed introduction to the model, see 【圖像分類】一文學會VGGNet(pytorch)_AI浩-CSDN博客.

Next, let's go through how to use VGG to classify plant seedlings.


Creating the Project

Create a new image classification project. The data folder holds the dataset, and the dataset folder holds the custom data-loading code; this time I won't use the default loading approach, which is too simple to be interesting. Then create train.py and test.py.

Create train.py in the project root and write the training code in it.

Importing the Required Libraries

import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
from dataset.dataset import SeedlingData
from torch.autograd import Variable
from torchvision.models import vgg16

Setting Global Parameters

Set the batch size, learning rate, and number of epochs, and check whether a CUDA device is available; fall back to the CPU if not.

# Global parameters
modellr = 1e-4
BATCH_SIZE = 32
EPOCHS = 10
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Image Preprocessing

When preprocessing the images, define the transform for the training set and the transform for the validation set separately. Besides resizing and normalization, the training transform can also apply augmentation such as rotation or random erasing, while the validation set needs no augmentation. Also, don't augment blindly: unreasonable augmentation can easily backfire and may even keep the loss from converging. (A sketch of an augmented training transform follows the basic transforms below.)

# Data preprocessing
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
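If you do decide to augment the training set, a minimal sketch using torchvision's built-in transforms (version 0.4+ for RandomErasing) might look like the following; the specific augmentations and parameters are illustrative choices of mine, not part of the original training script:

# Illustrative only: a training transform with light augmentation.
# RandomErasing operates on tensors, so it must come after ToTensor().
transform_aug = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),   # flip half of the images
    transforms.RandomRotation(15),            # rotate within +/-15 degrees
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
    transforms.RandomErasing(p=0.3)           # occasionally erase a small patch
])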

Loading the Data

Extract the dataset into the data folder, as shown below:

Then create __init__.py and dataset.py under the dataset folder, and put the following code in dataset.py:

Here is the core logic of the code.

Step 1: build a dictionary that maps each class name to an ID, so classes are represented by numbers.

Step 2: in __init__, collect the image paths. The test set is a single flat directory and is read directly; the training set has one folder per class under train, so we first get the class folders and then the image paths inside them. We then use sklearn's train_test_split to split the data into training and validation sets at a 7:3 ratio.

Step 3: in __getitem__, define how a single image and its label are read. Since some images have a bit depth of 32, the image is converted when it is loaded.

# coding:utf8
import os
from PIL import Image
from torch.utils import data
from torchvision import transforms as T
from sklearn.model_selection import train_test_split

Labels = {'Black-grass': 0, 'Charlock': 1, 'Cleavers': 2, 'Common Chickweed': 3,
          'Common wheat': 4, 'Fat Hen': 5, 'Loose Silky-bent': 6, 'Maize': 7,
          'Scentless Mayweed': 8, 'Shepherds Purse': 9,
          'Small-flowered Cranesbill': 10, 'Sugar beet': 11}


class SeedlingData(data.Dataset):
    def __init__(self, root, transforms=None, train=True, test=False):
        """
        Main goal: collect the paths of all images and split them into
        training, validation, and test data.
        """
        self.test = test
        self.transforms = transforms
        if self.test:
            # Test set: a flat directory of images
            imgs = [os.path.join(root, img) for img in os.listdir(root)]
            self.imgs = imgs
        else:
            # Training set: one folder per class; collect the images inside each folder
            imgs_labels = [os.path.join(root, img) for img in os.listdir(root)]
            imgs = []
            for imglabel in imgs_labels:
                for imgname in os.listdir(imglabel):
                    imgpath = os.path.join(imglabel, imgname)
                    imgs.append(imgpath)
            # Split into training and validation sets at a 7:3 ratio
            trainval_files, val_files = train_test_split(imgs, test_size=0.3, random_state=42)
            if train:
                self.imgs = trainval_files
            else:
                self.imgs = val_files

    def __getitem__(self, index):
        """
        Return one image and its label at a time.
        """
        img_path = self.imgs[index]
        img_path = img_path.replace("\\", '/')
        if self.test:
            label = -1
        else:
            labelname = img_path.split('/')[-2]
            label = Labels[labelname]
        data = Image.open(img_path).convert('RGB')  # force RGB: some images are 32-bit (RGBA)
        data = self.transforms(data)
        return data, label

    def __len__(self):
        return len(self.imgs)

Then call SeedlingData in train.py to load the data, remembering to import the dataset.py written above (from dataset.dataset import SeedlingData).

# Load the data
dataset_train = SeedlingData('data/train', transforms=transform, train=True)
dataset_test = SeedlingData("data/train", transforms=transform_test, train=False)
print(dataset_train.imgs)

# Build the DataLoaders
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=False)

Setting Up the Model

Use CrossEntropyLoss as the loss function. The model is VGG16 with pretrained weights. Replace the classifier so that the final layer outputs 12 classes, move the model to DEVICE, and use Adam as the optimizer. (A quick sanity check of the new classifier head follows the code below.)

# Instantiate the model and move it to the GPU
criterion = nn.CrossEntropyLoss()
model_ft = vgg16(pretrained=True)
model_ft.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 12),
)
model_ft.to(DEVICE)
# Adam as a simple, robust optimizer; keep the learning rate low
optimizer = optim.Adam(model_ft.parameters(), lr=modellr)


def adjust_learning_rate(optimizer, epoch):
    """Sets the learning rate to the initial LR decayed by 10 every 50 epochs"""
    modellrnew = modellr * (0.1 ** (epoch // 50))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew
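To sanity-check the replaced classifier head, you can push a dummy batch through the model. This snippet is only an illustration and is not part of the original script:

# Illustration only: the modified VGG16 should output 12 logits per image.
dummy = torch.randn(2, 3, 224, 224).to(DEVICE)  # a fake batch of two RGB images
with torch.no_grad():
    print(model_ft(dummy).shape)  # expected: torch.Size([2, 12])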

Setting Up Training and Validation

# Training loop
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    sum_loss = 0
    total_num = len(train_loader.dataset)
    print(total_num, len(train_loader))
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data).to(device), Variable(target).to(device)
        output = model(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print_loss = loss.data.item()
        sum_loss += print_loss
        if (batch_idx + 1) % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))
    ave_loss = sum_loss / len(train_loader)
    print('epoch:{},loss:{}'.format(epoch, ave_loss))


# Validation loop
def val(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    total_num = len(test_loader.dataset)
    print(total_num, len(test_loader))
    with torch.no_grad():
        for data, target in test_loader:
            data, target = Variable(data).to(device), Variable(target).to(device)
            output = model(data)
            loss = criterion(output, target)
            _, pred = torch.max(output.data, 1)
            correct += torch.sum(pred == target)
            print_loss = loss.data.item()
            test_loss += print_loss
        correct = correct.data.item()
        acc = correct / total_num
        avgloss = test_loss / len(test_loader)
        print('\nVal set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            avgloss, correct, len(test_loader.dataset), 100 * acc))


# Train
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model_ft, DEVICE, train_loader, optimizer, epoch)
    val(model_ft, DEVICE, test_loader)
torch.save(model_ft, 'model.pth')

Testing

I'll introduce two common ways to test. The first is the general approach: load the test data by hand and run predictions, as follows.

The test set is stored in a directory laid out as shown below:

Step 1: define the classes. Their order must match the class order used during training; do not change the order!

Step 2: define the transforms. Use the same transforms as the validation set, and do not apply augmentation.

Step 3: load the model and move it to DEVICE.

Step 4: read each image and predict its class. Note that images are read with PIL's Image rather than cv2, since the transforms do not accept cv2 arrays.

import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
from PIL import Image
from torch.autograd import Variable
import os

classes = ('Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed', 'Common wheat',
           'Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed', 'Shepherds Purse',
           'Small-flowered Cranesbill', 'Sugar beet')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)

path = 'data/test/'
testList = os.listdir(path)
for file in testList:
    img = Image.open(path + file).convert('RGB')  # force RGB: some images are 32-bit
    img = transform_test(img)
    img.unsqueeze_(0)
    img = Variable(img).to(DEVICE)
    out = model(img)
    # Predict
    _, pred = torch.max(out.data, 1)
    print('Image Name:{},predict:{}'.format(file, classes[pred.data.item()]))

The second way is to read the images with the custom Dataset:

import torch
import torch.utils.data.distributed
import torchvision.transforms as transforms
from dataset.dataset import SeedlingData
from torch.autograd import Variable

classes = ('Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed', 'Common wheat',
           'Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed', 'Shepherds Purse',
           'Small-flowered Cranesbill', 'Sugar beet')
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load("model.pth")
model.eval()
model.to(DEVICE)

dataset_test = SeedlingData('data/test/', transform_test, test=True)
print(len(dataset_test))
# Iterate over the test images (label is -1 in test mode) and predict each class
for index in range(len(dataset_test)):
    item = dataset_test[index]
    img, label = item
    img.unsqueeze_(0)
    data = Variable(img).to(DEVICE)
    output = model(data)
    _, pred = torch.max(output.data, 1)
    print('Image Name:{},predict:{}'.format(dataset_test.imgs[index], classes[pred.data.item()]))

Complete Code

      train.py

import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
from dataset.dataset import SeedlingData
from torch.autograd import Variable
from torchvision.models import vgg16

# Global parameters
modellr = 1e-4
BATCH_SIZE = 32
EPOCHS = 10
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Data preprocessing
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])
transform_test = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

# Load the data
dataset_train = SeedlingData('data/train', transforms=transform, train=True)
dataset_test = SeedlingData("data/train", transforms=transform_test, train=False)
print(dataset_train.imgs)

# Build the DataLoaders
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=False)

# Instantiate the model and move it to the GPU
criterion = nn.CrossEntropyLoss()
model_ft = vgg16(pretrained=True)
model_ft.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, 12),
)
model_ft.to(DEVICE)
# Adam as a simple, robust optimizer; keep the learning rate low
optimizer = optim.Adam(model_ft.parameters(), lr=modellr)


def adjust_learning_rate(optimizer, epoch):
    """Sets the learning rate to the initial LR decayed by 10 every 50 epochs"""
    modellrnew = modellr * (0.1 ** (epoch // 50))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew


# Training loop
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    sum_loss = 0
    total_num = len(train_loader.dataset)
    print(total_num, len(train_loader))
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = Variable(data).to(device), Variable(target).to(device)
        output = model(data)
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print_loss = loss.data.item()
        sum_loss += print_loss
        if (batch_idx + 1) % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))
    ave_loss = sum_loss / len(train_loader)
    print('epoch:{},loss:{}'.format(epoch, ave_loss))


# Validation loop
def val(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    total_num = len(test_loader.dataset)
    print(total_num, len(test_loader))
    with torch.no_grad():
        for data, target in test_loader:
            data, target = Variable(data).to(device), Variable(target).to(device)
            output = model(data)
            loss = criterion(output, target)
            _, pred = torch.max(output.data, 1)
            correct += torch.sum(pred == target)
            print_loss = loss.data.item()
            test_loss += print_loss
        correct = correct.data.item()
        acc = correct / total_num
        avgloss = test_loss / len(test_loader)
        print('\nVal set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            avgloss, correct, len(test_loader.dataset), 100 * acc))


# Train
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model_ft, DEVICE, train_loader, optimizer, epoch)
    val(model_ft, DEVICE, test_loader)
torch.save(model_ft, 'model.pth')

The complete project (vgg實現植物幼苗分類.rar) is available for download on CSDN.
