Julia: CIFAR-10 classification with the Flux deep learning framework
Contents
I. Installing Julia
II. A brief introduction to Flux
III. Installing Flux and its dependencies
IV. Downloading the cifar10 project
*V. Downloading the CIFAR-10 dataset
VI. Training
I. Installing Julia
The IDE used here is Atom. For installation and setup, see the separate tutorial "Windows 10: installing Atom and running Julia (detailed)".
II. A brief introduction to Flux
1. Flux.jl is a machine learning framework written entirely in Julia. Like most modern frameworks, it has some similarities to PyTorch.
2. Flux takes an elegant approach to machine learning. It is a 100% pure-Julia stack and provides lightweight abstractions on top of Julia's native GPU and automatic differentiation (AD) support.
3. Flux is a library for machine learning. It is powerful and flexible: many useful tools are built in, but you can also use the full power of the Julia language wherever you need it.
4. Flux follows a few key principles:
(1) Flux has relatively few explicit APIs for features like regularization or embeddings. Instead, writing down the mathematics directly just works, and it runs fast (a minimal example is sketched right after the links below).
(2) Everything, from LSTMs to GPU kernels, is plain Julia code. If in doubt, check the official tutorials; if you need a different building block or module, it is easy to write your own.
(3) Flux works well with other Julia libraries, from data frames and images to differential equation solvers, so it is easy to build complex data-processing pipelines that integrate Flux models.
5. Flux documentation (may require a VPN/proxy to access): https://fluxml.ai/Flux.jl/stable/
6. Flux model-zoo code examples: https://github.com/FluxML/model-zoo/
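To make principle (1) concrete, here is a minimal sketch in the style of the Flux documentation of that era (the data below is random toy data, not CIFAR-10): a linear model and a squared-error loss are just ordinary Julia code, and Flux differentiates them directly.
using Flux

W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b                        # the model is plain Julia code
loss(x, y) = sum((predict(x) .- y) .^ 2)       # so is the loss

x, y = rand(5), rand(2)                        # toy input and target
gs = gradient(() -> loss(x, y), params(W, b))  # Flux differentiates the plain code
@show gs[W]                                    # gradient of the loss w.r.t. W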
III. Installing Flux and its dependencies
1. Open the Julia console, or open Atom and start the Julia REPL at the bottom of the editor, then enter:
using Pkg
2. Install Flux:
Pkg.add("Flux")
3. In the same way, install the Metalhead dependency (along with Images and Statistics):
Pkg.add("Metalhead")
Pkg.add("Images")
Pkg.add("Statistics")
Installing Metalhead will usually pull in Images and Statistics for you automatically. You can confirm that everything loads with the quick check below.
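A quick sanity check (nothing more) is to load the packages once in the REPL; the first use after installation triggers precompilation and may take a while:
using Flux, Metalhead, Images, Statistics
println("All packages loaded successfully")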
IV. Downloading the cifar10 project
1. Download the model-zoo repository: https://github.com/FluxML/model-zoo/
2. cifar10.jl is located in model-zoo-master\vision\cifar10.
3. Open this project in Atom.
*V. Downloading the CIFAR-10 dataset
1. The CIFAR-10 download function used by the model-zoo code on GitHub extracts the archive the Linux way, by shelling out to tar. It lives in Metalhead, at
C:\Users\你的電腦用戶名\.julia\packages\Metalhead\fYeSU\src\datasets\autodetect.jl:
function download(which)
    if which === ImageNet
        error("ImageNet is not automatiacally downloadable. See instructions in datasets/README.md")
    elseif which == CIFAR10
        local_path = joinpath(@__DIR__, "..", "..", "datasets", "cifar-10-binary.tar.gz")
        #print(local_path)
        dir_path = joinpath(@__DIR__, "..", "..", "datasets")
        if(!isdir(joinpath(dir_path, "cifar-10-batches-bin")))
            if(!isfile(local_path))
                Base.download("https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz", local_path)
            end
            run(`tar -xzvf $local_path -C $dir_path`)
        end
    else
        error("Download not supported for $(which)")
    end
end
This means the extraction step (the run(`tar ...`) call) does not work on Windows 10, but that does not prevent us from using the code on Windows: we just download and extract the archive manually (a Julia-based alternative to manual extraction is sketched after step 3 below).
2. Download link: https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
3. Once the download finishes, put the archive into this folder (this is the location the download function above expects):
C:\Users\你的電腦用戶名\.julia\packages\Metalhead\fYeSU\datasets
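If you prefer not to extract the archive by hand, one cross-platform option is to do the extraction from Julia. This sketch is my own suggestion, not part of the model-zoo code: it assumes the Tar and CodecZlib packages are installed, and the paths with <your username> are placeholders you must adjust (the Metalhead folder hash may also differ on your machine).
using Tar, CodecZlib   # Pkg.add("Tar"); Pkg.add("CodecZlib")

# Placeholder paths: fill in <your username> and check the Metalhead folder name
local_path = raw"C:\Users\<your username>\.julia\packages\Metalhead\fYeSU\datasets\cifar-10-binary.tar.gz"
dir_path   = raw"C:\Users\<your username>\.julia\packages\Metalhead\fYeSU\datasets"

# Tar.extract needs a destination that does not exist yet, so unpack into a
# temporary directory first and then move the batches folder into place
tmp = mktempdir()
open(local_path) do io
    Tar.extract(GzipDecompressorStream(io), joinpath(tmp, "out"))
end
mv(joinpath(tmp, "out", "cifar-10-batches-bin"),
   joinpath(dir_path, "cifar-10-batches-bin"))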
After extraction you should see a cifar-10-batches-bin folder containing data_batch_1.bin through data_batch_5.bin, test_batch.bin, and batches.meta.txt.
Note: if you do not put the data here, you will keep running into errors complaining that the CIFAR-10 dataset cannot be found! You can verify the location with the snippet below.
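To double-check the location before training, run a quick test in the REPL (fYeSU is the folder name from this post; the hash in your Metalhead install may differ):
datasets_dir = joinpath(homedir(), ".julia", "packages", "Metalhead", "fYeSU", "datasets")
@show isdir(joinpath(datasets_dir, "cifar-10-batches-bin"))   # should print true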
VI. Training
1. Core code:
cifar10.jl
using Flux, Metalhead, Statistics
using Flux: onehotbatch, onecold, crossentropy, throttle
using Metalhead: trainimgs
using Images: channelview
using Statistics: mean
using Base.Iterators: partition
# VGG16 and VGG19 models
vgg16() = Chain(
    Conv((3, 3), 3 => 64, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(64),
    Conv((3, 3), 64 => 64, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(64),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 64 => 128, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(128),
    Conv((3, 3), 128 => 128, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(128),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 128 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 256 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    x -> maxpool(x, (2, 2)),
    x -> reshape(x, :, size(x, 4)),
    Dense(512, 4096, relu),
    Dropout(0.5),
    Dense(4096, 4096, relu),
    Dropout(0.5),
    Dense(4096, 10),
    softmax) |> gpu
vgg19() = Chain(
    Conv((3, 3), 3 => 64, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(64),
    Conv((3, 3), 64 => 64, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(64),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 64 => 128, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(128),
    Conv((3, 3), 128 => 128, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(128),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 128 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(256),
    Conv((3, 3), 256 => 256, relu, pad=(1, 1), stride=(1, 1)),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 256 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    x -> maxpool(x, (2, 2)),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    BatchNorm(512),
    Conv((3, 3), 512 => 512, relu, pad=(1, 1), stride=(1, 1)),
    x -> maxpool(x, (2, 2)),
    x -> reshape(x, :, size(x, 4)),
    Dense(512, 4096, relu),
    Dropout(0.5),
    Dense(4096, 4096, relu),
    Dropout(0.5),
    Dense(4096, 10),
    softmax) |> gpu
# Function to convert an RGB image to a Float32 array laid out as height x width x channels
getarray(X) = Float32.(permutedims(channelview(X), (2, 3, 1)))
# Fetching the train and validation data and getting them into proper shape
X = trainimgs(CIFAR10)
imgs = [getarray(X[i].img) for i in 1:50000]
labels = onehotbatch([X[i].ground_truth.class for i in 1:50000],1:10)
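# Mini-batches of 100 images each; the first 49,000 images are used for training and the last 1,000 are held out below for validation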
train = gpu.([(cat(imgs[i]..., dims = 4), labels[:,i]) for i in partition(1:49000, 100)])
valset = collect(49001:50000)
valX = cat(imgs[valset]..., dims = 4) |> gpu
valY = labels[:, valset] |> gpu
# Defining the loss and accuracy functions
m = vgg16()
loss(x, y) = crossentropy(m(x), y)
accuracy(x, y) = mean(onecold(m(x), 1:10) .== onecold(y, 1:10))
# Defining the callback and the optimizer
evalcb = throttle(() -> @show(accuracy(valX, valY)), 10)
opt = ADAM()
# Starting to train models
Flux.train!(loss, params(m), train, opt, cb = evalcb)
# Fetch the test data from Metalhead and get it into proper shape.
# CIFAR-10 does not specify a separate validation set, so valimgs fetches the test data here instead of testimgs
test = valimgs(CIFAR10)
testimgs = [getarray(test[i].img) for i in 1:10000]
testY = onehotbatch([test[i].ground_truth.class for i in 1:10000], 1:10) |> gpu
testX = cat(testimgs..., dims = 4) |> gpu
# Print the final accuracy
@show(accuracy(testX, testY))
2. In the menu bar, choose Packages->Julia->Run File. You can then follow training in the REPL via the callback's accuracy printouts; the final line of the script prints the accuracy on the test set.
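Note that the script above makes a single pass over the training mini-batches. If you want to train for more epochs, one option (a sketch assuming the same Flux version as the model-zoo code) is the @epochs helper:
using Flux: @epochs
@epochs 10 Flux.train!(loss, params(m), train, opt, cb = evalcb)   # 10 passes over the data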
3. To train on the GPU, we also need to install CuArrays:
using Pkg
Pkg.add("CuArrays")
as well as CUDA and cuDNN support. For details, see the official documentation: https://fluxml.ai/Flux.jl/stable/gpu/#Installation-
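Once CuArrays, CUDA, and cuDNN are set up, a quick way to confirm the GPU path works (a minimal sketch, independent of the CIFAR-10 script) is to push a small layer and a random batch to the GPU:
using Flux, CuArrays

m = Dense(10, 5) |> gpu           # a small layer moved to the GPU
x = rand(Float32, 10, 3) |> gpu   # a random batch of 3 inputs
@show typeof(m(x))                # should be a CuArray if the GPU is actually used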