
Naftan Women's FC: Champions of Belarusian Football - Squad, Stats & Achievements

Overview of Naftan (w) Football Team

The Naftan (w) football team, hailing from Belarus, competes in the top tier of women’s football. Founded in 2013 and known for its dynamic play and strategic prowess, the team is managed by a seasoned coach. Competing in the Belarusian Women’s Football Championship, Naftan (w) has established itself as a formidable force.

Team History and Achievements

Since its inception, Naftan (w) has made significant strides in women’s football. The team boasts multiple league titles and has consistently ranked among the top in national competitions. Notable seasons include their championship win in 2018 and several runner-up finishes that highlight their competitive edge.

Current Squad and Key Players

The current squad features a mix of experienced veterans and promising young talent. Key players include:

  • Anna Kovalchuk: Striker known for her goal-scoring ability.
  • Elena Petrovskaya: Midfielder with exceptional playmaking skills.
  • Maria Ivanova: Defender renowned for her defensive acumen.

Team Playing Style and Tactics

Naftan (w) employs a versatile 4-3-3 formation, focusing on high pressing and quick transitions. Their strengths lie in their offensive strategies and cohesive teamwork, while their weaknesses may include occasional lapses in defensive organization.

Interesting Facts and Unique Traits

The team is affectionately nicknamed “The Oil Ladies” due to their association with the Naftan oil refinery. They have a passionate fanbase known as “The Green Brigade,” who are known for their vibrant support during matches. Rivalries with teams like Minsk City FC add an exciting dynamic to their games.

Lists & Rankings of Players, Stats, or Performance Metrics

  • ✅ Anna Kovalchuk – top goal scorer with 15 goals this season.
  • ✅ Elena Petrovskaya – assists leader with 10 assists.
  • ✅ Maria Ivanova – consistently rated among the league’s top defenders.
  • ❌ Defensive errors – a noted area for improvement.

Comparisons with Other Teams in the League or Division

When compared to other teams in the league, Naftan (w) stands out for its balanced attack and solid defense. They often outperform rivals like Dynamo Minsk in terms of possession and passing accuracy, making them a favorite among analysts.

Case Studies or Notable Matches

A breakthrough game for Naftan (w) was their 3-0 victory against BATE Borisov Women’s Team in 2019, which showcased their tactical superiority and set the tone for future successes. This match remains a highlight in their recent history.

Statistic | Naftan (w) | Rival Team
Total Goals Scored This Season | 40 | 30
Last Five Matches Form | W-W-D-L-W | L-D-W-L-W
Head-to-Head Record Against Top Rivals | 6W-1D-3L | N/A

Tips & Recommendations for Analyzing the Team and Betting Insights

To effectively analyze Naftan (w), focus on their recent form, key player performances, and head-to-head records against upcoming opponents. Betting insights suggest considering odds fluctuations based on team news such as injuries or suspensions.
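To make form comparisons concrete, a tiny helper (illustrative only; the standard 3-1-0 points convention is assumed) can score form strings like those in the table above:

```python
# Illustrative sketch: convert a recent-form string into league points,
# assuming the usual 3 points for a win, 1 for a draw, 0 for a loss.
def form_points(form: str) -> int:
    points = {"W": 3, "D": 1, "L": 0}
    return sum(points[result] for result in form.split("-"))

print(form_points("W-W-D-L-W"))  # 10 of a possible 15
```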

Famous Quote About Naftan (w)

“Naftan (w) combines tactical intelligence with raw talent, making them a thrilling team to watch.” – Sports Analyst Jane Doe.

Pros & Cons of the Team’s Current Form

  • ✅ Strong offensive lineup capable of scoring multiple goals per match.
  • ❌ Occasional lapses in defensive organization – a noted area for improvement.

---

JiaqiLi/STN-NET/models.py
```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class STN3d(nn.Module):
    """Predicts a 3x3 spatial transform for a (batch, 3, num_points) cloud."""

    def __init__(self):
        super(STN3d, self).__init__()
        # A point cloud has 3 input channels (x, y, z); the 1x1
        # convolutions act on each point independently.
        self.conv1 = nn.Conv1d(3, 64, kernel_size=1)
        self.conv2 = nn.Conv1d(64, 128, kernel_size=1)
        self.conv3 = nn.Conv1d(128, 1024, kernel_size=1)

        self.fc1 = nn.Linear(1024, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 9)

        self.relu = nn.ReLU()

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))

        # Symmetric max pooling over the point dimension -> (batch, 1024).
        x, _ = torch.max(x, dim=-1)

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        # Add the flattened 3x3 identity so training starts near
        # the identity transform.
        iden = torch.from_numpy(np.eye(3, dtype=np.float32).flatten())
        x = x + iden.to(x.device)
        return x.view(-1, 3, 3)


if __name__ == '__main__':
    print(STN3d()(torch.rand(4, 3, 2500)).shape)
```

---

JiaqiLi/STN-NET/README.md

[//]: # (Image References)
    [transformed]: ./misc_images/transformed.png "Transformed"
    [before]: ./misc_images/before.png "Before"
    [after]: ./misc_images/after.png "After"

    # Spatial Transformer Networks

    ## Overview

This project implements spatial transformer networks using PyTorch.

    ## Introduction

Spatial transformer networks allow spatial manipulation of data within neural networks.
For example, when processing an image they let the network zoom into the image,
focus on certain parts, and so on.

    ![alt text][before]
    ![alt text][transformed]
    ![alt text][after]

    ### What are spatial transformers?

Spatial transformers allow your neural network to learn how to perform spatial transformations on an input.
They can be used anywhere you want some sort of transformation-invariant behaviour.
In this project we will be looking at applying spatial transformers to point cloud data, as the sketch below illustrates.
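To make that concrete before diving in, here is a minimal sketch (not from the repository) of what applying a predicted 3x3 transform to a batch of point clouds amounts to:

```python
import torch

# Hypothetical shapes: 4 clouds of 2500 points, stored channels-first,
# plus one 3x3 transform per cloud (identity as a stand-in here).
points = torch.rand(4, 3, 2500)
transforms = torch.eye(3).expand(4, 3, 3)

# Applying the transform is a single batched matrix multiply.
transformed = torch.bmm(transforms, points)
print(transformed.shape)  # torch.Size([4, 3, 2500])
```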

    ### What are point clouds?

    Point clouds are sets of vertices defined by X,Y,Z coordinates.
    These points define objects within space.
    You can think of point clouds like this:

    ![alt text][point_cloud]

    We will be using point clouds that contain objects like chairs,
    tables etc.
    You can see some examples below:

    ![alt text][objects]
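As a quick illustration of that representation (shapes assumed, not taken from the repository), a point cloud is just an array with one (x, y, z) row per point:

```python
import torch

# One cloud of 2500 points: a row of (x, y, z) coordinates per point.
cloud = torch.rand(2500, 3)

# Conv1d layers expect channels first, so batches in this project
# are stored as (batch, 3, num_points).
batch = cloud.t().unsqueeze(0)  # shape: (1, 3, 2500)
```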

    ## Implementation

    Here we will go through my implementation step by step.

    ### Loading Data

    I'm using data from PointNet:
    https://github.com/charlesq34/pointnet/blob/master/model.py
    which is publicly available here:
    https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip

    I've modified it slightly so that it outputs batches instead of single examples.
    This code is available here:
    https://github.com/JiaqiLi/STN-NET/blob/master/data_utils.py

    Here is what one batch looks like:

    ![alt text][batch]
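The repository's data_utils.py is linked above; as a rough sketch of what loading one of those HDF5 files into batches can look like (the dataset keys 'data' and 'label' and the file name are assumptions based on the PointNet release):

```python
import h5py
import torch
from torch.utils.data import DataLoader, TensorDataset

# Assumed layout of the ModelNet40 HDF5 files: 'data' is (N, 2048, 3),
# 'label' is (N, 1). The file name here is illustrative.
with h5py.File('ply_data_train0.h5', 'r') as f:
    points = torch.from_numpy(f['data'][:]).float()
    labels = torch.from_numpy(f['label'][:]).long().squeeze(-1)

# Channels-first for Conv1d, then wrap in a batching DataLoader.
dataset = TensorDataset(points.transpose(1, 2), labels)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```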

    ### Defining Spatial Transformer Module

    First I define my STN module:

```python
class STN(nn.Module):
    def __init__(self, num_points=2500):
        super(STN, self).__init__()
        self.num_points = num_points

        # Per-point feature extraction with 1x1 convolutions.
        self.conv11 = nn.Conv1d(3, 64, kernel_size=1)
        self.conv12 = nn.Conv1d(64, 64, kernel_size=1)
        self.conv21 = nn.Conv1d(64, 128, kernel_size=1)
        self.conv22 = nn.Conv1d(128, 128, kernel_size=1)
        self.conv31 = nn.Conv1d(128, 1024, kernel_size=1)

        # Two fully connected heads: one for the rotation matrix,
        # one for the translation vector.
        self.fc11 = self._fc(1024, 512)
        self.fc12 = self._fc(512, 256)
        self.fc_rot = nn.Linear(256, 9)

        self.fc21 = self._fc(1024, 512)
        self.fc22 = self._fc(512, 256)
        self.fc_trans = nn.Linear(256, 3)

    def _fc(self, i, o):
        # Small helper: a linear layer followed by a ReLU.
        return nn.Sequential(nn.Linear(i, o, bias=False), nn.ReLU())
```

As you can see, it takes one parameter, which defines how many points each example contains.
The module consists of per-point convolutional layers followed by fully connected layers at the end.
I also added two fully connected heads at the end,
one being responsible for generating rotation matrices,
the other being responsible for generating translation vectors.

```python
    def forward(self, x):
        x = F.relu(self.conv11(x))
        x = F.relu(self.conv12(x))

        x = F.relu(self.conv21(x))
        x = F.relu(self.conv22(x))

        x = F.relu(self.conv31(x))
```

    Here I pass our input through all convolutional layers first.

```python
        x, _ = torch.max(x, dim=-1)
```

Then I apply max pooling along dimension -1, i.e. over the points, which leaves one 1024-dimensional feature vector per example.

```python
        # After pooling the tensor is (batch, 1024); flatten explicitly
        # before the fully connected layers.
        x = x.view(-1, 1024)
```

Then I make sure the output is shaped (batch, 1024) so that we can feed it into the fully connected layers.

```python
        # Rotation head: two FC blocks, then project to a flattened 3x3 matrix.
        x_rot = self.fc_rot(self.fc12(self.fc11(x))).view(-1, 3, 3)

        # Translation head: two FC blocks, then project to a 3-vector.
        x_trans = self.fc_trans(self.fc22(self.fc21(x))).view(-1, 3)

        return x_rot, x_trans
```

Finally we apply the fully connected heads, producing two separate outputs:
one being rotation matrices,
the other being translation vectors.
Both are reshaped appropriately before being returned.
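With the imports from models.py in scope, a quick smoke test of the module as sketched above:

```python
stn = STN(num_points=2500)
rot, trans = stn(torch.rand(8, 3, 2500))
print(rot.shape, trans.shape)  # torch.Size([8, 3, 3]) torch.Size([8, 3])
```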

    ### Defining Classifier Module

    Next I define my classifier module:

```python
class PointNetCls(nn.Module):
    def __init__(self, num_points=2500, n_classes=40, stn=True):
```

As you can see, it takes three parameters:
the number of points each example contains,
the number of classes we wish to classify our data into,
and whether we want to use an STN module or not.

    If `stn` is set to `True` then we initialise an instance variable called `stn`,
    which holds an instance of our STN module defined earlier.

```python
        super(PointNetCls, self).__init__()
        if stn:
            self.stn = STN(num_points=num_points)

        # Per-point 1x1 convolutions, as in the STN module.
        self.conv11 = nn.Conv1d(3, 64, kernel_size=1)
        self.conv12 = nn.Conv1d(64, 128, kernel_size=1)
        self.conv13 = nn.Conv1d(128, 1024, kernel_size=1)

        # Classification head, with dropout before the final layer.
        self.fc1 = nn.Linear(1024, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, n_classes)
        self.dropout = nn.Dropout(p=0.3)
```

```python
    def forward(self, x):
        if hasattr(self, 'stn'):
```

If `stn` was set to `True`, then we pass our input through the STN module first:

```python
            rot_matrix, trans_vector = self.stn(x)
```

This returns two tensors:
a rotation matrix tensor containing one rotation matrix
per example within our batch,
and a translation vector tensor containing one translation vector
per example within our batch.

```python
            # One rotation matrix per example: a batched matrix multiply.
            x = torch.bmm(rot_matrix, x)
            # Broadcast each example's translation across all of its points.
            x = x + trans_vector.unsqueeze(-1)
```

Batched matrix multiplication applies the right rotation to each example in the batch, and broadcasting adds each example's translation vector to every point, so the input keeps its original (batch, 3, num_points) shape throughout.

```python
        x = F.relu(self.conv11(x))
        x = F.relu(self.conv12(x))
        x, _ = torch.max(F.relu(self.conv13(x)), dim=-1)

        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(self.dropout(x))

        return x
```

    Finally I pass everything through my convolutional layers followed by my fully connected layers.
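Putting the pieces together, a quick shape check under the defaults sketched above:

```python
model = PointNetCls(num_points=2500, n_classes=40, stn=True)
logits = model(torch.rand(8, 3, 2500))
print(logits.shape)  # torch.Size([8, 40])
```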

    ### Training Loop

    Next I define my training loop:

```python
from tqdm import tqdm

optimizer = torch.optim.Adam(model.parameters(), lr=LR, betas=(0.9, 0.99))
loss_fn = nn.CrossEntropyLoss()

for epoch_i in range(EPOCHS):
    model.train()
    for batch_i, (inputs_, labels_) in enumerate(train_loader):
        optimizer.zero_grad()
        logits = model(inputs_)
        loss = loss_fn(logits, labels_)
        loss.backward()
        optimizer.step()

        if batch_i % 10 == 0:
            print("Epoch %i/%i Step %i/%i Loss %.5f"
                  % (epoch_i, EPOCHS, batch_i, len(train_loader), loss.item()))

# Save the trained weights, then evaluate on the test set.
torch.save(model.state_dict(), 'model.pt')
model.eval()

test_loss = 0.0
test_accuracy = 0

for data, target in tqdm(test_loader, desc='Testing'):
    data = data.cuda()
    target = target.cuda()

    with torch.no_grad():
        outputs = model(data)

    prediction = outputs.argmax(dim=-1)
    test_loss += F.cross_entropy(outputs, target, reduction='sum').item()
    test_accuracy += prediction.eq(target).sum().item()

test_loss /= len(test_loader.dataset)
print('Test Set Summary: Loss %.5f Accuracy %.5f%%'
      % (test_loss, test_accuracy / len(test_loader.dataset) * 100))

print("Finished Training")
```

As you can see there isn't much going on here.
First I initialise the optimizer with the Adam algorithm and use cross-entropy as the loss function; PyTorch's `CrossEntropyLoss` works directly on integer labels and raw logits, so the labels don't need to be converted into one-hot format.

Within each epoch I run through all batches produced by the train loader: I zero the accumulated gradients, pass the inputs through the model instance created earlier, compute the loss, backpropagate, and take an optimizer step, printing progress every tenth batch.

Finally, after running through all epochs, I save the trained weights to the file model.pt located within the working directory and set the model instance into evaluation mode. I then run through the test loader: the data and targets are converted into CUDA tensors, the forward pass is wrapped in `torch.no_grad()` so no gradients are computed, class labels are predicted with an argmax over the outputs, and the summed loss and the number of correct predictions are accumulated.

I then average the test loss over the entire test set, print a summary, and print a message indicating training finished successfully.

    ## Results

    Here are some results obtained during training process:

    Accuracy vs Epochs plot:

    ![alt text][accuracy_vs_epoch]

    Loss vs Epochs plot:

    ![alt text][loss_vs_epoch]

    Training Time plot:

    ![alt text][training_time]

    ## Discussion

From the plots above you should notice that accuracy increases while loss decreases over the course of training, which indicates successful convergence towards a good solution.

    ## References

    [PointNet](https://github.com/charlesq34/pointnet/blob/master/model.py)

[Spatial Transformer Networks](https://arxiv.org/pdf/1506.02025.pdf)

---

# Project Report

    ## Introduction
A Spatial Transformer Network (STN) is designed to allow spatial manipulation
of data inside neural networks.

STNs were originally proposed for images but can be applied
to any data type.
They make models more robust
and more stable under changes such as scale,
rotation, and so on.

In this project,
we will implement STNs using PyTorch.
We will use a point cloud dataset containing various shapes such as chairs
and tables,
and try applying transformations to them.

STNs consist of three main parts:
a feature extraction network,
a transformation network,
and a sampling network.

The feature extraction network extracts features from the input data.
The transformation network generates transformation parameters.
The sampling network applies the transformation parameters to the extracted features.

In order to implement an STN,
we first need the feature extraction network.
It takes raw input data
and extracts useful features from it.
In the case of images,
this could be done using CNNs (convolutional neural networks),
while for point cloud data
we can use PointNet.

PointNet takes a raw point cloud as input
and extracts useful features from it.

The transformation network generates transformation parameters
based on the extracted features.
It consists of two main parts:
a localization network,
which predicts the transformation parameters,
and a grid generator,
which uses the predicted parameters to generate grid coordinates.

The sampling network then applies the transformation
by sampling at the grid coordinates generated by the grid generator.

In order to implement the sampling network,
we first need the localization network.
It consists of two main parts:
a regression layer,
which predicts raw transformation parameters based on the extracted features,
and an activation layer,
which applies an activation function such as sigmoid to the raw parameters
to generate the final transformation parameters.
PyTorch's built-in pieces for the image case are sketched below.
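For images, PyTorch ships the grid generator and the sampler directly as `F.affine_grid` and `F.grid_sample`. Here is a minimal sketch of the three-part pipeline (localization → grid generation → sampling); it is illustrative and not taken from this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySTN2d(nn.Module):
    """Illustrative 2D STN: localization net -> grid generator -> sampler."""

    def __init__(self):
        super().__init__()
        # Localization network: predicts a 2x3 affine matrix per image.
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 6),
        )
        # Bias the regression layer towards the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                           # parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)  # grid generator
        return F.grid_sample(x, grid, align_corners=False)          # sampler

out = TinySTN2d()(torch.rand(2, 1, 28, 28))  # same shape out: (2, 1, 28, 28)
```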


    ## Implementation Details

First, let's look at the feature extraction part.
We will use the PointNet architecture described here:
http://shapenet.cs.stanford.edu/media/papers/shrec17-shapenet.pdf

PointNet consists of four main parts:
an Input Transformation Layer (T),
Convolution Layers (C),
Fully Connected Layers (D),
and an Output Transformation Layer (T').

The Input Transformation Layer (T)
takes the raw point cloud as input
and transforms it using a learned affine transformation T.

The Convolution Layers (C)
take the transformed points as input
and extract useful features from them using convolutions.

The Fully Connected Layers (D)
take the extracted features as input
and predict class labels from them.

The Output Transformation Layer (T')
takes the predicted class labels as input
and transforms them again using a second learned affine transformation T'.

First, let's define the PointNet architecture as a sketch:

```python
class PointNet(nn.Module):
    def __init__(self, n_input, n_output):
        super(PointNet, self).__init__()
        self.t = T()
        self.c = C()
        self.d = D()
        self.t_prime = T_prime()

    def forward(self, X):
        X_t = self.t(X)
        X_c = self.c(X_t)
        X_d = self.d(X_c)
        Y = self.t_prime(X_d)
        return Y
```

Here n_input and n_output are the numbers of inputs and outputs respectively;
T(), C(), D(), and T_prime() construct the four layers described above;
X, X_t, X_c, X_d, and Y are the raw points, the transformed points, the convolved features, the fully connected features, and the output respectively;
and forward(X) is the forward propagation function that takes raw points X as input and returns the output Y after applying the transformations to them.

The input transformation layer T is a single learned linear layer:

```python
class T(nn.Module):
    def __init__(self, n_input, n_output):
        super(T, self).__init__()
        self.linear = nn.Linear(n_input, n_output)

    def forward(self, X):
        return self.linear(X)
```

Here n_input and n_output are the numbers of inputs and outputs respectively,
linear is the Linear(n_input, n_output) layer,
and forward(X) applies it to the raw points, returning the transformed points.

The convolution stack C applies a sequence of convolution layers:

```python
class C(nn.Module):
    def __init__(self, filter_sizes):
        super(C, self).__init__()
        # One ConvLayer per requested filter size, applied in order.
        self.layers = nn.ModuleList(ConvLayer(fs) for fs in filter_sizes)

    def forward(self, X):
        for layer in self.layers:
            X = layer(X)
        return X
```

Here filter_sizes is the list of filter sizes used by the convolution layers,
ConvLayer(fs) builds one convolution layer with filter size fs,
and forward(X) threads the transformed points through the layers in order, returning the convolved features.

The fully connected stack D is built the same way:

```python
class D(nn.Module):
    def __init__(self, hidden_sizes):
        super(D, self).__init__()
        self.layers = nn.ModuleList(Layer(hs) for hs in hidden_sizes)

    def forward(self, X):
        for layer in self.layers:
            X = layer(X)
        return X
```

Here hidden_sizes is the list of hidden sizes used by the fully connected layers,
each Layer(hs) computes ReLU(linear(x)) with hidden size hs,
and forward(X) threads the convolved features through the layers in order, returning the fully connected features.

The output transformation layer T' is left as an identity mapping here:

```python
class T_prime(nn.Module):
    def __init__(self):
        super(T_prime, self).__init__()

    def forward(self, Y):
        return Y
```

Here Y represents the predicted class labels, and forward(Y) simply returns them unchanged; you can modify this function depending on your specific needs, for example by applying an activation function such as sigmoid or softmax to Y.

Now let's look at the localization part.

The localization network takes the features extracted by the feature extraction network (PointNet)
as input and predicts transformation parameters based on those features.
It consists of two main parts:
a Regression Layer (R)
and an Activation Function (A).

The Regression Layer (R)
takes the extracted features and predicts raw transformation parameters;
implemented as a linear layer, this is R(features) = linear(features) = parameters.

The Activation Function (A)
applies an activation function such as sigmoid to the predicted raw parameters,
generating the final transformation parameters: A(parameters) = sigmoid(parameters).

Now let's look at the sampling part.

The sampling network applies the transformation parameters generated by the localization network
to the features extracted by the feature extraction network (PointNet).
It consists of three main parts:
a Grid Generator (G),
a Sampler (S),
and a Transformation Module (T).

The Grid Generator (G)
uses the transformation parameters to produce grid coordinates:
G(transformation_parameters) = grid_coordinates.

The Sampler (S)
samples the input at the coordinates produced by the grid generator:
S(grid_coordinates) = sampled_features.

The Transformation Module (T)
produces the transformed features from those samples:
T(sampled_features) = transformed_features.

Now let's look at the implementation details. The module skeletons are:

```python
class LocalisationNetwork(nn.Module):
    def __init__(self):
        super(LocalisationNetwork, self).__init__()

class RegressionLayer(nn.Module):
    def __init__(self):
        super(RegressionLayer, self).__init__()

class ActivationFunction(nn.Module):
    def __init__(self):
        super(ActivationFunction, self).__init__()

class GridGenerator(nn.Module):
    def __init__(self):
        super(GridGenerator, self).__init__()

class Sampler(nn.Module):
    def __init__(self):
        super(Sampler, self).__init__()

class TransformationModule(nn.Module):
    def __init__(self):
        super(TransformationModule, self).__init__()

class SamplingNetwork(nn.Module):
    def __init__(self):
        super(SamplingNetwork, self).__init__()
```

The localization network composes the regression layer (a linear layer) with the activation function (e.g. sigmoid) to turn extracted features into final transformation parameters. The sampling network composes the grid generator, the sampler, and the transformation module: grid coordinates are generated from the transformation parameters, the input is sampled at those coordinates, and the transformation module produces the transformed features.

Finally, let's put everything together:

```python
class Model(nn.Module):
    def __init__(self, n_input, n_output):
        super(Model, self).__init__()
        self.feature_extraction_network = PointNet(n_input, n_output)
        self.localisation_network = LocalisationNetwork()
        self.sampling_network = SamplingNetwork()
```

The full model combines the feature extraction network (implementing PointNet), the localization network (predicting transformation parameters from the extracted features), and the sampling network (applying those predicted parameters), yielding a complete spatial transformer network.
---

yongwei-huang/Dynamic-Memory-Architecture-for-Prediction-Caches/run.sh~

```bash
#!/bin/bash

# compile the C++ program
make clean; make

# run the benchmarks
./dynamic_memory_architecture_for_prediction_caches.exe \
    trace/mcf.trace trace/mcf.data mcf.csv \
    trace/soplex.trace trace/soplex.data soplex.csv \
    trace/swaptions.trace trace/swaptions.data swaptions.csv \
    trace/vpr.trace trace/vpr.data vpr.csv

# clean up raw outputs
rm mcf.out soplex.out swaptions.out vpr.out
```
---

yongwei-huang/Dynamic-Memory-Architecture-for-Prediction-Caches/thanks.tex

\begin{center}
{\Large \bf Acknowledgments}
\end{center}

I would like to express my deep gratitude to my advisor, Prof. Michael Taylor, whose guidance throughout my PhD study has been invaluable; his advice has always given me great insight beyond the technical knowledge needed to conduct research. Without his support, neither my academic nor my personal growth over these years would have been possible.