
Karacabey Belediye Spor: Squad, Achievements & Stats in the Turkish Regional Amateur League

Overview of Karacabey Belediye Spor

Karacabey Belediye Spor, a prominent football team based in the Karacabey district of Bursa Province, Turkey, competes in the Turkish Regional Amateur League. Founded in 1967, the team is managed by Coach Ahmet Yıldız and has established itself as a competitive force within its league.

Team History and Achievements

Over the years, Karacabey Belediye Spor has carved out a respectable niche in Turkish football. While they have not clinched major national titles, their consistent performance in regional competitions has earned them recognition. Notable seasons include several top finishes in their league standings.

Current Squad and Key Players

The squad boasts talented players like Cemal Kaya (Forward) and Emre Gürkan (Midfielder), both known for their strategic plays and goal-scoring abilities. Their current roster features a blend of experienced veterans and promising young talents.

Key Players

  • Cemal Kaya – Forward – Known for his agility and sharp shooting.
  • Emre Gürkan – Midfielder – Renowned for his playmaking skills.

Team Playing Style and Tactics

Karacabey Belediye Spor typically employs a 4-3-3 formation, emphasizing possession-based play with quick transitions from defense to attack. Their strengths lie in their cohesive midfield control and dynamic forward line, while their weaknesses include occasional lapses in defensive organization.

Tactics Overview

  • Formation: 4-3-3
  • Strengths: Midfield dominance, quick counterattacks
  • Weaknesses: Defensive vulnerabilities under pressure

Interesting Facts and Unique Traits

Fans affectionately call the team “The Lions of Karacabey,” reflecting their fierce playing style. The club’s rivalry with nearby teams adds an extra layer of excitement to their matches. Traditions such as pre-match fan gatherings highlight the strong community support for the team.

Rivalries & Traditions

  • Nickname: The Lions of Karacabey
  • Rival Teams: Local district rivals that fuel competitive spirit.

Lists & Rankings of Players, Stats, or Performance Metrics


Name          Position     Avg Goals per Match
Cemal Kaya    Forward      0.8
Emre Gürkan   Midfielder   N/A

Comparisons with Other Teams in the League or Division

Karacabey Belediye Spor is often compared to other regional powerhouses due to its balanced squad and tactical flexibility. While some teams may have star-studded lineups, Karacabey’s strength lies in teamwork and strategic execution.

Case Studies or Notable Matches

A breakthrough game was their victory against a top-tier regional opponent last season, which showcased their potential on a larger stage. This match highlighted key players stepping up during critical moments.

Betting Analysis Tables: Team Stats & Recent Form

Last 5 Matches Results (W/L/D): L W L W W
Average Goals Scored per Match: 1.6



Recent Head-to-Head Records Against Top Rivals

Karacabey vs Opponent A
– Wins: 3
– Losses: 1
– Draws: 1
– Avg Goals Scored: 1.8
– Avg Goals Conceded: 0.9
Karacabey vs Opponent B
– Wins: 4
– Losses: 0
– Draws: 1
– Avg Goals Scored: 2.0
– Avg Goals Conceded: 0.7
Odds Comparison with Rivals

Karacabey vs Opponent A Odds:
+150 on Win
+200 on Draw
+250 on Loss
Karacabey vs Opponent B Odds:
+180 on Win
+220 on Draw
+260 on Loss
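
For context, a positive American (moneyline) price converts to an implied probability of 100 / (odds + 100). A quick sketch of that arithmetic in Python, using the Opponent A prices quoted above:

```python
def implied_probability(american_odds):
    """Implied win probability from American (moneyline) odds."""
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return -american_odds / (-american_odds + 100)

# Karacabey vs Opponent A: +150 / +200 / +250
for outcome, odds in [('Win', 150), ('Draw', 200), ('Loss', 250)]:
    print(outcome, round(implied_probability(odds), 3))  # 0.4, 0.333, 0.286
```

The three implied probabilities sum to slightly more than 1; the excess is the bookmaker's margin.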
# Neuroimage
Analysis code for Sarah Miller’s neuroimaging work at Duke University

## Table of Contents
* [Data](#data)
* [Analysis](#analysis)
* [Results](#results)

## Data
All data are stored locally at `/Volumes/Seagate Backup Plus Drive/Neuroimage`.

### Dataset
The dataset used here consists of two groups:
* **Control**: healthy subjects without any history of psychiatric disorder or neurological disease.
* **Bipolar**: patients diagnosed with bipolar disorder.

Each subject underwent three types of fMRI scans:
* Resting state fMRI scan (`rsfmri`): Subjects were instructed to keep their eyes open but not focus on anything specific.
* Auditory oddball task (`auditory`): Subjects heard tones at two different frequencies; they were instructed to press a button when they heard one tone but not the other.
* Working memory task (`wm`): Subjects viewed sequences of numbers presented one at a time; they were instructed to remember whether each number was odd or even.

### Preprocessing

Preprocessing was performed using FSL v6 ([FMRIB Software Library](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL)). Preprocessed data are available at `/Volumes/Seagate Backup Plus Drive/Neuroimage/preprocessed_data`.

The preprocessing steps performed were:

#### Slice timing correction
Performed using `slicetimer`

#### Motion correction
Performed using `mcflirt`

#### Spatial normalization
Performed using `flirt` (for linear registration) followed by `fnirt` (for nonlinear registration). Images were normalized into MNI space using the MNI152 template.

#### Smoothing
Performed using `fslmaths`. A Gaussian kernel was used with FWHM = 6 mm.

#### Artifact removal
Performed using AFNI’s [Artifact Detection Tools](http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dArtifactDetect.html).
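
For orientation, the FSL steps above map onto command-line calls roughly as follows. This is a minimal sketch, not the project's actual pipeline script: file names are placeholders, default options are assumed, and the AFNI artifact-removal step is omitted.

```python
import subprocess

def run(cmd):
    """Run one FSL command, raising if it fails."""
    subprocess.run(cmd, check=True)

# Slice timing correction
run(['slicetimer', '-i', 'func.nii.gz', '-o', 'func_st.nii.gz'])

# Motion correction
run(['mcflirt', '-in', 'func_st.nii.gz', '-out', 'func_mc.nii.gz'])

# Spatial normalization: linear (flirt), then nonlinear (fnirt) into MNI152 space
run(['flirt', '-in', 'func_mc.nii.gz', '-ref', 'MNI152_T1_2mm.nii.gz',
     '-out', 'func_lin.nii.gz', '-omat', 'affine.mat'])
run(['fnirt', '--in=func_mc.nii.gz', '--ref=MNI152_T1_2mm.nii.gz',
     '--aff=affine.mat', '--iout=func_mni.nii.gz'])

# Smoothing: fslmaths expects sigma, so FWHM 6 mm -> sigma = 6 / 2.355 ~ 2.55 mm
run(['fslmaths', 'func_mni.nii.gz', '-s', '2.55', 'func_smooth.nii.gz'])
```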

## Analysis
All analysis code is written in Python v3.

### Functional connectivity

Functional connectivity analysis was performed using nilearn ([nilearn.github.io](https://nilearn.github.io)). Code is available at `/Volumes/Seagate Backup Plus Drive/Neuroimage/code/nilearn_connectivity_analysis.py`.

#### Correlation matrix

For each subject, correlation matrices were calculated between every pair of regions-of-interest (ROIs) defined by the Harvard-Oxford atlas ([Harvard-Oxford cortical structural atlas](http://www.cma.mgh.harvard.edu/atlas/hocortical_structural_atlas.htm)).
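
The per-subject computation follows the standard nilearn pattern: average the signal within each atlas label, then correlate the ROI time series. A minimal sketch, with the subject path and the specific atlas variant as placeholders (`NiftiLabelsMasker` lives in `nilearn.input_data` on older nilearn releases):

```python
import numpy as np
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker  # nilearn.input_data on older versions

# Harvard-Oxford cortical atlas (this particular variant is an assumption).
atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')

# One time series per atlas label, standardized.
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)
time_series = masker.fit_transform('subject_001_rsfmri.nii.gz')  # placeholder path

# Pairwise Pearson correlations between ROI time series: (n_rois, n_rois).
corr_matrix = np.corrcoef(time_series.T)
np.savetxt('subject_001_rs.csv', corr_matrix, delimiter=',')
```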

#### Connectivity statistics

Mean correlations across all ROIs were calculated separately for each group (control/bipolar) for each type of fMRI scan (resting state/auditory task/working memory task).

Differences between groups were tested using t-tests assuming equal variances.
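
As a concrete illustration of that test, assuming one mean-correlation value per subject in each group (file names and layout are hypothetical):

```python
import numpy as np
from scipy import stats

# One mean correlation per subject, per group (hypothetical single-column CSVs).
control = np.loadtxt('corr_means_control_rs.csv', delimiter=',')
bipolar = np.loadtxt('corr_means_bipolar_rs.csv', delimiter=',')

# Independent-samples t-test assuming equal variances (scipy's default).
t_stat, p_value = stats.ttest_ind(control, bipolar, equal_var=True)
print(f't = {t_stat:.3f}, p = {p_value:.4f}')
```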

### ROI-based analysis

ROI-based analysis was performed using nilearn ([nilearn.github.io](https://nilearn.github.io)). Code is available at `/Volumes/Seagate Backup Plus Drive/Neuroimage/code/nilearn_roi_analysis.py`.

#### Masking

For each subject, images were masked according to the Harvard-Oxford atlas regions-of-interest (ROIs); all voxels outside these ROIs were set to zero, as sketched below.
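
In plain nibabel terms the masking step looks roughly like this; the sketch assumes the atlas image has already been resampled to the functional grid, and all file names are placeholders:

```python
import nibabel as nib
import numpy as np

func = nib.load('func_mni.nii.gz')  # preprocessed 4-D functional image
atlas_img = nib.load('HarvardOxford-cort-maxprob-thr25-2mm.nii.gz')

# Boolean mask: True inside any atlas ROI, False elsewhere.
roi_mask = atlas_img.get_fdata() > 0

# Zero every voxel outside the ROIs, across all volumes.
masked_data = func.get_fdata() * roi_mask[..., np.newaxis]
nib.save(nib.Nifti1Image(masked_data, func.affine), 'func_masked.nii.gz')
```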

#### Mean signal extraction

For each ROI defined by the Harvard-Oxford atlas:
* For each subject:
  * Calculate the mean signal across all voxels within the ROI, across all volumes.
  * Save the mean signal values into a .csv file (see the sketch after this list).
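
A minimal sketch of that loop for a single subject, with placeholder paths; the masker averages over voxels within each label, and a second average over volumes leaves one value per ROI (this assumes every atlas label is present in the image):

```python
import pandas as pd
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker  # nilearn.input_data on older versions

atlas = datasets.fetch_atlas_harvard_oxford('cort-maxprob-thr25-2mm')
masker = NiftiLabelsMasker(labels_img=atlas.maps)

# (n_volumes, n_rois): per-volume mean signal inside each ROI.
roi_ts = masker.fit_transform('subject_001_rsfmri.nii.gz')  # placeholder path

# Average over volumes -> one mean value per ROI, saved to CSV.
mean_signal = roi_ts.mean(axis=0)
df = pd.DataFrame({'roi': atlas.labels[1:], 'mean_signal': mean_signal})
df.to_csv('subject_001_rs_mean_signal.csv', index=False)  # labels[1:] drops 'Background'
```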

#### Statistics

Mean signal values across subjects within each group were calculated separately for each type of fMRI scan (resting state/auditory task/working memory task).

Differences between groups were tested using t-tests assuming equal variances.

## Results
Results are saved locally at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results`.

### Functional connectivity

Correlation matrices between every pair of regions-of-interest are saved locally as .csv files at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results/corr_matrices`. These files are named according to subject ID followed by type of fMRI scan.

Example file name:

`subject_001_rs.csv`

Plots showing mean correlation values across all ROIs for resting state scans are saved locally as .png files at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results/corr_mean_plots`. These plots show mean correlation values separately for control subjects and bipolar patients along with error bars representing standard error.

Example file name:

`corr_mean_rest.png`

Plots showing differences between groups are saved locally as .png files at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results/corr_diff_plots`. These plots show difference scores separately for resting state scans along with error bars representing standard error.

Example file name:

`corr_diff_rest.png`

### ROI-based analysis

Plots showing mean signal values across all voxels within each region-of-interest defined by the Harvard-Oxford atlas are saved locally as .png files at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results/signal_mean_plots`. These plots show mean signal values separately for control subjects and bipolar patients along with error bars representing standard error.

Example file name:

`signal_mean_control_rs.png`

Plots showing differences between groups are saved locally as .png files at `/Volumes/Seagate Backup Plus Drive/Neuroimage/results/signal_diff_plots`. These plots show difference scores separately for resting state scans along with error bars representing standard error.

Example file name:

`signal_diff_control_rs.png`

import numpy as np               # linear algebra
import pandas as pd              # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt  # plotting

def plot_corr_means(df_list):
    '''
    Description:
    Plot means + std err

    Input:
    df_list = list containing dataframe objects

    Output:
    Saves plot image (.png)
    '''

    group_labels = ['Control', 'Bipolar']
    group_colors = ['b', 'r']
    x_labels = ['Rest', 'Auditory', 'WM']

    for i, g_df in enumerate(df_list):
        # Alternate control/bipolar styling across the dataframes.
        plt.errorbar(x_labels,
                     g_df['mean'],
                     yerr=g_df['std_err'],
                     fmt='o',
                     color=group_colors[i % len(group_colors)],
                     label=group_labels[i % len(group_labels)])

    plt.title('Mean Correlation Values')
    plt.ylabel('Mean Correlation')
    plt.xlabel('Scan Type')
    plt.legend()
    plt.savefig('corr_mean_plot.png')
    plt.show()

def plot_corr_diff(df_list):
    '''
    Description:
    Plot difference scores + std err

    Input:
    df_list = list containing dataframe objects

    Output:
    Saves plot image (.png)
    '''

    group_labels = ['Control', 'Bipolar']
    x_labels = ['Rest', 'Auditory', 'WM']

    for i, g_df in enumerate(df_list):
        plt.errorbar(x_labels,
                     g_df['mean'],
                     yerr=g_df['std_err'],
                     fmt='o',
                     color='k',
                     label=group_labels[i % len(group_labels)])

    plt.title('Difference Scores')
    plt.ylabel('Difference Score')
    plt.xlabel('Scan Type')
    plt.legend()
    plt.savefig('corr_diff_plot.png')
    plt.show()

if __name__ == '__main__':

    df_control_rs = pd.read_csv('../results/corr_means/control/rs.csv')
    df_bipolar_rs = pd.read_csv('../results/corr_means/bipolar/rs.csv')

    df_control_auditory = pd.read_csv('../results/corr_means/control/audio.csv')
    df_bipolar_auditory = pd.read_csv('../results/corr_means/bipolar/audio.csv')

    df_control_wm = pd.read_csv('../results/corr_means/control/wm.csv')
    df_bipolar_wm = pd.read_csv('../results/corr_means/bipolar/wm.csv')

    df_control_rs['group'] = 'control'
    df_bipolar_rs['group'] = 'bipolar'

    df_control_auditory['group'] = 'control'
    df_bipolar_auditory['group'] = 'bipolar'

    df_control_wm['group'] = 'control'
    df_bipolar_wm['group'] = 'bipolar'

    corr_dfs_list = [df_control_rs, df_bipolar_rs,
                     df_control_auditory, df_bipolar_auditory,
                     df_control_wm, df_bipolar_wm]

    # Group difference scores, one file per group and scan type.
    diff_control_rs = pd.read_csv('../results/diff_scores/control/rs.csv')
    diff_bipolar_rs = pd.read_csv('../results/diff_scores/bipolar/rs.csv')
    diff_control_audio = pd.read_csv('../results/diff_scores/control/audio.csv')
    diff_bipolar_audio = pd.read_csv('../results/diff_scores/bipolar/audio.csv')

    diff_dfs_list = [diff_control_rs, diff_bipolar_rs,
                     diff_control_audio, diff_bipolar_audio]

    group_label = ['Control', 'Bipolar']
    color = ['blue', 'red']
    def corr_stats_plotter(corr_group):

        control_rows = corr_group[corr_group.group == 'control'].reset_index(drop=True)
        bipolar_rows = corr_group[corr_group.group == 'bipolar'].reset_index(drop=True)

        control_rows.plot(kind='bar', y='mean', x='scan_type', yerr='std_err',
                          color=color[0], label=group_label[0],
                          title='Mean Correlation Values', legend=True,
                          error_kw=dict(ecolor=color[0]))

        bipolar_rows.plot(kind='bar', y='mean', x='scan_type', yerr='std_err',
                          color=color[1], label=group_label[1],
                          title='Mean Correlation Values', legend=True,
                          error_kw=dict(ecolor=color[1]))

    def diff_stats_plotter(diff_group):

        control_rows = diff_group[diff_group.group == 'control'].reset_index(drop=True)
        bipolar_rows = diff_group[diff_group.group == 'bipolar'].reset_index(drop=True)

        control_rows.plot(kind='bar',
                          y='mean_difference_score',
                          x='scan_type_pairwise_comparison_between_groups_of_scan_types_in_same_subjects_within_same_groups_of_subjects',
                          yerr='std_err_difference_score',
                          color=color[0], label=group_label[0],
                          title='Difference Scores Between Groups Within Each Scan Type',
                          error_kw=dict(ecolor=color[0]),
                          alpha=.5, width=.25, position=-.25)

        bipolar_rows.plot(kind='bar',
                          y='mean_difference_score',
                          x='scan_type_pairwise_comparison_between_groups_of_scan_types_in_same_subjects_within_same_groups_of_subjects',
                          yerr='std_err_difference_score',
                          color=color[1], label=group_label[1],
                          title='Difference Scores Between Groups Within Each Scan Type',
                          error_kw=dict(ecolor=color[1]),
                          alpha=.5, width=.25, position=.25)

    for corr_gdf in corr_dfs_list:
        print(corr_gdf.head(10))

    def corr_stats_table_maker(corr_group):

        # One summary row per scan type: ROI, subject and voxel counts.
        # Voxel counts are taken as non-zero entries of the `sub_image` column.
        rows = []
        for scan_type, scan_df in corr_group.groupby('scan_type'):
            rows.append({
                'scan_type': scan_type,
                'num_of_regions_in_each_ROI': scan_df.roi_name.nunique(),
                'num_of_subj': scan_df.subject_id.nunique(),
                'num_of_voxels_in_each_ROI': int(np.mean(
                    [np.count_nonzero(roi_df.sub_image)
                     for _, roi_df in scan_df.groupby('ROI')])),
                'total_num_of_voxels_for_each_subj': int(np.sum(
                    [np.count_nonzero(sub_df.sub_image)
                     for _, sub_df in scan_df.groupby('subject_id')])),
            })

        return pd.DataFrame(rows)

# Code description

This script performs functional connectivity analysis on preprocessed resting-state fMRI data from healthy controls and patients diagnosed with bipolar disorder.

## Setup

Run this script from a terminal after navigating to its location:

```
cd /path/to/script/location/
```

Then run:

```
python nilearn_connectivity_analysis.py --help
```

You should see output similar to:

```
usage: nilearn_connectivity_analysis.py [-h] [--subjects SUBJECTS [SUBJECTS ...]]
                                        [--subjects_dir SUBJECTS_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --subjects SUBJECTS [SUBJECTS ...]
  --subjects_dir SUBJECTS_DIR
```

To run the script use:

```
python nilearn_connectivity_analysis.py --subjects SUBJ_LIST \
    --subjects_dir SUBJ_DIR_PATH \
    --output_dir OUTPUT_DIR_PATH \
    --analysis_mode AN_MODE \
    --atlas_path ATLAS_PATH \
    --save_matlab MAT_SAVE_BOOL \
    --save_nifti NIFTI_SAVE_BOOL \
    --save_nifti_mask NIFTI_MASK_SAVE_BOOL
```

where:

* `SUBJ_LIST` is either "control" or "bipolar"
* `SUBJ_DIR_PATH` is the path where the preprocessed data are located
* `OUTPUT_DIR_PATH` is the path where results will be stored
* `AN_MODE` specifies whether to perform functional connectivity ("fc") or ROI-based ("roibased") analysis
* `ATLAS_PATH` is the path where the Harvard-Oxford cortical structural atlas can be found
* `MAT_SAVE_BOOL` specifies whether to save functional connectivity matrices (.mat format)
* `NIFTI_SAVE_BOOL` specifies whether to save functional connectivity matrices (.nii.gz format)
* `NIFTI_MASK_SAVE_BOOL` specifies whether to save mask images (.nii.gz format)

Example command:

```
python nilearn_connectivity_analysis.py --subjects control,bipolar \
    --subjects_dir /Users/Sarah/Desktop/neuroimaging/data/preprocessed_data \
    --output_dir /Users/Sarah/Desktop/neuroimaging/results/fc_matrices \
    --analysis_mode fc \
    --atlas_path /Users/Sarah/Desktop/neuroimaging/data/HarvardOxford-cortical-l-maxprob-thr50.nii.gz \
    --save_matlab True \
    --save_nifti False \
    --save_nifti_mask False
```

Note that the paths specified above need to be changed according to your own directory structure.

## Output

Functional connectivity matrices will be stored locally under RESULTS_DIR_PATH/MATLAB_FILES folder if MAT_SAVE_BOOL==True.

Mask images will be stored locally under RESULTS_DIR_PATH/NIFTI_FILES folder if NIFTI_MASK_SAVE_BOOL==True.

If NIFTI_SAVE_BOOL==True then functional connectivity matrices will also be stored under RESULTS_DIR_PATH/NIFTI_FILES folder.

**This repository contains code written by Sarah Jones-Miller during her time working under Dr James Giordano**

**Code description**

This script performs ROI-based analysis on preprocessed fMRI data from healthy controls and patients diagnosed with bipolar disorder.

**Setup**

Run this script from a terminal after navigating to its location:

```
cd /path/to/script/location/
```

Then run:

```
python nilearn_roi_analysis.py -h
```

You should see output similar to:

```
usage: nilearn_roi_analysis.py [-h] [--subjects SUBJECTS [SUBJECTS ...]]
                               [--atlas_path ATLAS_PATH] [--output_dir OUTPUT_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --subjects SUBJECTS [SUBJECTS ...]
  --atlas_path ATLAS_PATH
  --output_dir OUTPUT_DIR
```

To run the script use:

```
python nilearn_roi_analysis.py \
    --subjects SUBJ_LIST \
    --atlas_path ATLAS_PATH \
    --output_dir OUTPUT_DIR
```
where:

SUBJ_LIST is either "control" or "biped"

ATLAS_PATH specifies path where Harvard Oxford cortical structural atlas can be found

OUTPUT_DIR specifies path where results will be stored

Example command:

```
python nilearn_roi_analysis.py \
    --subjects control,bipolar \
    --atlas_path /Users/Sarah/Desktop/neuroimaging/data/HarvardOxford-cortical-l-maxprob-thr50.nii.gz \
    --output_dir /Users/Sarah/Desktop/neuroimaging/results/signal_means
```

Note that the paths specified above need to be changed according to your own directory structure.

**Output**

Mean signal values will be stored locally under the OUTPUT_DIR folder.

import os # operating system libraries
import numpy as np # linear algebra libraries
import pandas as pd # data processing libraries
from argparse import ArgumentParser # argument parsing library

from sklearn.model_selection import train_test_split

from sklearn.preprocessing import StandardScaler

from sklearn.svm import SVC

from sklearn.metrics import confusion_matrix

from matplotlib import pyplot as plt

parser = ArgumentParser(description=("This script performs machine learning "
                                     "classification using Support Vector Machines"))

parser.add_argument("--data_file",
                    required=False,
                    default=None,
                    help=("Path where the feature matrix "
                          "and target vector can be found"))

parser.add_argument("--scaler",
                    required=False,
                    default=None,
                    help=("Scaler object used to normalize "
                          "the feature matrix"))

parser.add_argument("--svm_model",
                    required=False,
                    default=None,
                    help="Trained SVM model")

args, _ = parser.parse_known_args()

class SVMClassifier(object):

    """
    SVM classifier wrapper: loads data, scales features,
    then fits and evaluates a support vector machine.
    """

    def __init__(self, data_file=None, scale_obj=None, model_obj=None):

        self.data_file = data_file
        self.scale_obj = scale_obj
        self.model_obj = model_obj

    def load_data(self, data_file=None):

        """
        Load the feature matrix X and target vector y from a CSV file
        whose last column holds the integer class labels.

        Input:
        data_file : path to the CSV file (defaults to self.data_file)

        Output:
        X, y
        """

        if data_file is None:
            data_file = self.data_file

        data = np.loadtxt(data_file, delimiter=',')

        X = data[:, :-1].astype(float)
        y = data[:, -1].astype(int)

        return X, y

    def scale_features(self, X, scale_obj=None):

        """
        Scale features.

        Input:
        X : array-like, shape=(n_samples, n_features)
        scale_obj : scaler object

        Output:
        scaled_X
        """

        if scale_obj is None:
            scale_obj = self.scale_obj

        scaled_X = scale_obj.fit_transform(X.copy())

        return scaled_X

    def fit_svm_model(self, X, y, scale_obj=None, model_obj=None,
                      C_value=.01, kernel_type='rbf'):

        """
        Fit an SVM model.

        Input:
        X : array-like, shape=(n_samples, n_features)
            Feature matrix
        y : array-like, shape=(n_samples,)
            Target vector
        scale_obj : scaler object used to normalize features
        model_obj : previously constructed SVM model, if any
        C_value : float, penalty parameter C
        kernel_type : str, kernel function type ('linear'/'rbf'/etc.)
        """

        if scale_obj is None:
            scale_obj = self.scale_obj

        if model_obj is None:
            self.model_svm = SVC(C=C_value, kernel=kernel_type,
                                 gamma=.001, max_iter=-1, tol=.001)
        else:
            self.model_svm = model_obj

        scaled_X = self.scale_features(X, scale_obj=scale_obj)

        self.model_svm.fit(scaled_X, y.ravel())

        return self.model_svm

    def predict(self, X, scale_obj=None, model_obj=None):

        """
        Predict labels for a test set X.

        Input:
        X : array-like, shape=(n_samples, n_features)
            Test set feature matrix
        scale_obj : scaler object used to normalize features
        model_obj : trained model object
        """

        if scale_obj is None:
            scale_obj = self.scale_obj

        if model_obj is None:
            model_obj = self.model_svm

        scaled_X = self.scale_features(X.copy(), scale_obj=scale_obj)

        predicted_y = model_obj.predict(scaled_X)

        return predicted_y

    def evaluate_model(self, y, predicted_y):

        """
        Evaluate performance metrics given test set labels y and
        predicted labels predicted_y (binary classification).

        Input:
        y : array-like, shape=(n_samples,)
            Test set target vector
        predicted_y : array-like, shape=(n_samples,)
            Predicted labels
        """

        cm_ = confusion_matrix(y_true=y.ravel(), y_pred=predicted_y.ravel())

        tn_, fp_, fn_, tp_ = cm_.ravel()

        accuracy_ = (tp_ + tn_) / (tp_ + fp_ + tn_ + fn_)
        precision_ = tp_ / (tp_ + fp_)
        recall_ = tp_ / (tp_ + fn_)
        specificity_ = tn_ / (tn_ + fp_)
        fscore_ = 2 * (precision_ * recall_) / (precision_ + recall_)

        return accuracy_, precision_, recall_, specificity_, fscore_

import numpy as np

def detect_artifacts(residual, zscore_threshold):
    """
    Description:
    Flag voxels whose z-scored residual magnitude reaches the threshold.

    Input:
    residual, zscore_threshold

    Output:
    artifact_map
    """

    zscores = (residual - np.nanmean(residual)) / np.nanstd(residual)

    artifact_map = np.abs(zscores) >= zscore_threshold

    return artifact_map

def compute_resampled_residue(residue, replacement_values, zscore_threshold):
    """
    Description:
    Compute a resampled residue, replacing outlier voxels.

    Input:
    residue, replacement_values, zscore_threshold

    Output:
    resampled_residue
    """

    artifacts = detect_artifacts(residue, zscore_threshold)

    artifact_indices = list(zip(*np.where(artifacts)))

    replacement_values = replacement_values.flatten()

    resampled_residue = residue.copy()

    for rep_idx, repl_val in zip(artifact_indices, replacement_values):
        resampled_residue[rep_idx] = repl_val

    return resampled_residue

def replace_outliers_with_median(residue, zscore_threshold):
    """
    Description:
    Replace outliers with the median value.

    Input:
    residue, zscore_threshold

    Output:
    outlier-replaced residue
    """

    median_value = np.nanmedian(residue)

    outlier_replaced_residue = residue.copy()

    outlier_replaced_residue[detect_artifacts(residue, zscore_threshold)] = median_value

    return outlier_replaced_residue

def replace_outliers_with_interpolation(residue, zscore_threshold, radius=1):
    """
    Description:
    Replace outliers with the median of their local neighbourhood.

    Input:
    residue, zscore_threshold, radius

    Output:
    interpolated residue
    """

    interpolated_residue = residue.copy()

    for idx in zip(*np.where(detect_artifacts(residue, zscore_threshold))):
        neighbours = neighbourhood_interpolation_neighbours(residue, idx, radius)
        interpolated_residue[idx] = np.nanmedian(neighbours)

    return interpolated_residue

def neighbourhood_interpolation_neighbours(residue, centre, radius):
    """
    Description:
    Collect voxel values within the interpolation radius of a centre voxel.

    Input:
    residue, centre, radius

    Output:
    neighbourhood voxel values
    """

    i, j, k = centre

    neighbours = [residue[x, y, z]
                  for x in range(max(i - radius, 0), min(i + radius + 1, residue.shape[0]))
                  for y in range(max(j - radius, 0), min(j + radius + 1, residue.shape[1]))
                  for z in range(max(k - radius, 0), min(k + radius + 1, residue.shape[2]))]

    return neighbours

def detect_motion(motion_parameters, motion_parameters_threshold):
    """
    Description:
    Detect motion based on a motion parameters threshold.

    Input:
    motion_parameters, motion_parameters_threshold

    Output:
    motion_detected_flag
    """

    motion_detected_flag = motion_parameters > motion_parameters_threshold

    return motion_detected_flag

def apply_temporal_filter(raw_time_series, time_series_filter_kernel):
    """
    Description:
    Apply a temporal filter kernel over a raw time series.

    Input:
    raw_time_series, time_series_filter_kernel (a callable)

    Output:
    filtered_time_series
    """

    filtered_time_series = time_series_filter_kernel(raw_time_series)

    return filtered_time_series

def compute_linear_regression_coefficients(input, output):
    """
    Description:
    Compute linear regression coefficients for each output column
    regressed on the input time series.

    Input:
    input, output = time series input and output vectors respectively.

    Output:
    coefficients (slope and intercept per output column)
    """

    # np.polyfit with a 2-D output fits one line per column.
    coefficients = np.polyfit(input, output, deg=1)

    return coefficients

def compute_linear_regression_residual(coefficients, input, output):
    """
    Description:
    Compute linear regression residuals given the coefficients and the
    input and output time series.

    Input:
    coefficients = slope and intercept per output column.
    input, output = time series vectors.

    Output:
    linear regression residuals.
    """

    slope, intercept = coefficients

    fitted = np.outer(input, slope) + intercept

    linear_regression_residual = output - fitted

    return linear_regression_residual

