All of Julia in 5 minutes or less
For people with some coding experience.
Array notation
# Arrays have commas
[1, 2, 3]
3-element Vector{Int64}:
 1
 2
 3
# Matrices have spaces for columns and semicolons for rows
[1 2; 3 4]
2×2 Matrix{Int64}:
 1  2
 3  4
[1 2
3 4]
2×2 Matrix{Int64}:
 1  2
 3  4
# Tensors have triple semicolons, separating slices along the third dimension
[1 2
3 4 ;;;
5 6
7 8]
2×2×2 Array{Int64, 3}:
[:, :, 1] =
 1  2
 3  4

[:, :, 2] =
 5  6
 7  8
Indexing starts from 1. Deal with it ;-P
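A quick illustration, with v as a throwaway name:
v = [10, 20, 30]
v[1]     # 10, the first element sits at index 1
v[end]   # 30, end names the last valid index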
tenzor = [1 2
3 4 ;;;
5 6
7 8]
2×2×2 Array{Int64, 3}:
[:, :, 1] =
 1  2
 3  4

[:, :, 2] =
 5  6
 7  8
matriz = tenzor[:, 1, :] # notice the projection: fixing the middle index drops that dimension
2×2 Matrix{Int64}:
 1  5
 3  7
Array algebra
matriz + matriz
2×2 Matrix{Int64}:
 2  10
 6  14
matriz + matriz' # ' is the (conjugate) transpose
2×2 Matrix{Int64}:
 2   8
 8  14
The dot . broadcasts a function or operator elementwise over an array:
matriz .+ 1
2×2 Matrix{Int64}:
 2  6
 4  8
(1:5) * (1:5)' # column times row: an outer product
5×5 Matrix{Int64}:
 1   2   3   4   5
 2   4   6   8  10
 3   6   9  12  15
 4   8  12  16  20
 5  10  15  20  25
(1:5) .* (1:5) # elementwise product
5-element Vector{Int64}:
  1
  4
  9
 16
 25
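The same dot syntax broadcasts any function, not only operators; a small sketch:
sqrt.(matriz)   # elementwise square roots, a 2×2 Matrix{Float64}
abs2.(1:5)      # squared magnitude of each element of the range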
GPU
using CUDA # <- load a library
# I don't have an NVIDIA GPU here;
# run this on malatesta instead:
# x = cu(rand(5, 3))
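Once a CUDA-capable GPU is available, the same array code runs on the device; a sketch of where the commented line leads:
x = cu(rand(5, 3))   # copy a random matrix to GPU memory as a CuArray
y = 2 .* x .+ 1      # fused broadcast, compiled to a single GPU kernel
Array(y)             # copy the result back to the CPU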
Type type typed
Julia has a rich type system and powerful multiple dispatch.
Use them; they are your friends.
typeof(π)
Irrational{:π}
typeof(3.14)
Float64
Real[1, 2.0, π]
3-element Vector{Real}:
 1
 2.0
 π = 3.1415926535897...
Float64[1,2,π]
3-element Vector{Float64}:
 1.0
 2.0
 3.141592653589793
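A small aside: types form a hierarchy, and <: tests the subtype relation:
Int <: Real          # true
Float64 <: Real      # true
supertype(Float64)   # AbstractFloat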
There are a couple of ways of defining functions. This is one:
fff(x) = typeof(x) == Int ? "f"^x : "miao"
fff (generic function with 1 method)
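The same logic can also be written with the full function block or as an anonymous function; fff2 and fff3 are illustrative names, not part of the notebook:
function fff2(x)
    typeof(x) == Int ? "f"^x : "miao"
end
fff3 = x -> x isa Int ? "f"^x : "miao"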
We can do better using types and dispatch:
begin
ff(x) = "miao"     # fallback method for any type
ff(x::Int) = "f"^x # specialised method, selected when x is an Int
end
ff (generic function with 2 methods)
ff(4.5)
"miao"
ff(8)
"ffffffff"
ff.([1,4,"ciao",4.0,6])
5-element Vector{String}:
 "f"
 "ffff"
 "miao"
 "miao"
 "ffffff"
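To see which methods dispatch can pick from, ask for the method table; the @code_* macros below then show what the compiler does with one specific call:
methods(ff)   # lists the 2 methods of ff and their signatures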
@code_lowered ff(8)
CodeInfo( 1 ─ %1 = "f" ^ x └── return %1 )
@code_typed ff(8)
CodeInfo(
1 ─ %1 = invoke Base.repeat("f"::String, x::Int64)::String
└──      return %1
) => String
@code_llvm ff(8)   # prints the LLVM IR for this call (long output omitted)
@code_native ff(8) # prints the native assembly (long output omitted)
AI
using Flux
using Flux: params
NN_model = Chain(
    Dense(10, 5, relu), # 10 inputs to 5 hidden units, relu activation
    Dense(5, 2, relu),  # 5 hidden units to 2 outputs
    softmax             # normalise the outputs to probabilities
)
Chain(
  Dense(10 => 5, relu),  # 55 parameters
  Dense(5 => 2, relu),   # 12 parameters
  NNlib.softmax,
)                        # Total: 4 arrays, 67 parameters, 524 bytes.
pₙₙ = params(NN_model)   # collect all trainable parameters
Params([Float32[-0.3394587 -0.4917924 … -0.42839822 0.14853561; 0.5374313 -0.36971772 … 0.2575355 -0.59711635; … ; -0.5401597 -0.0074484563 … 0.12053383 0.4985586; -0.25185105 0.20210315 … -0.10787606 0.57953155], Float32[0.0, 0.0, 0.0, 0.0, 0.0], Float32[0.6839579 -0.8135689 … -0.75520873 -0.7309942; -0.3706536 -0.7086955 … -0.20821618 0.8319535], Float32[0.0, 0.0]])
rand_input = rand(Float32, 10)
10-element Vector{Float32}:
 0.8823784
 0.62837523
 0.26190466
 0.5754853
 0.2443623
 0.26416337
 0.7277685
 0.4581483
 0.06500381
 0.40792662
l(x) = Flux.Losses.crossentropy(NN_model(x), [0.5, 0.5])   # loss against a fixed uniform target
l (generic function with 1 method)
l(rand_input)
0.698170393705368
grads = gradient(pₙₙ) do   # gradients of the loss w.r.t. every parameter
    l(rand_input)
end
Grads(...)
for p in pₙₙ
    println(grads[p])   # each gradient has the same shape as its parameter
end
using Flux.Optimise: update!, Descent
begin
    η = 0.1                      # learning rate
    for p in pₙₙ
        update!(p, η * grads[p]) # in-place descent step: p .-= η .* grads[p]
    end
end
And a large variety of optimisers is predefined:
my_optimisers = Descent(0.01)
Descent(0.01)
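Descent is the plainest choice; Flux.Optimise also provides momentum-based and adaptive schemes, for example (the arguments shown are the usual default hyperparameters):
Momentum(0.01, 0.9)   # gradient descent with momentum
Adam(0.001)           # adaptive moment estimation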
data, labels = rand(10, 100), fill(0.5, 2, 100)   # 100 random inputs, uniform targets
([0.7815919226843753 0.6174080820903107 … 0.2576920239695416 0.039748842399825235; 0.5353542480610113 0.9182202481854659 … 0.26564117585104485 0.2434559057716722; … ; 0.9837556448226826 0.49472477351996835 … 0.9096512833979349 0.9179341448593525; 0.006631804018282228 0.009824532641440786 … 0.18500130003408688 0.6764573810543957], [0.5 0.5 … 0.5 0.5; 0.5 0.5 … 0.5 0.5])
my_loss(x, y) = Flux.Losses.crossentropy(NN_model(x), y)
my_loss (generic function with 1 method)
Flux.train!(   # one pass over the data, updating pₙₙ in place
    my_loss,
    pₙₙ,
    [(data, labels)],
    my_optimisers)
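A quick way to sanity-check the pass is to re-evaluate the loss; on this toy problem it should drift towards the uniform target:
my_loss(data, labels)   # typically a bit lower than before the train! call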
Built with Julia 1.8.3, CUDA 3.12.0, and Flux 0.13.9.
To run this tutorial locally, download [this file](/tutorials/00FastIntro.jl) and open it with Pluto.jl.