Overview of Karacabey Belediye Spor
Karacabey Belediye Spor, a prominent football team based in the Karacabey district of Bursa Province, Turkey, competes in the Turkish Regional Amateur League. Founded in 1967, the team is managed by Coach Ahmet Yıldız and has established itself as a competitive force within its league.
Team History and Achievements
Over the years, Karacabey Belediye Spor has carved out a respectable niche in Turkish football. While they have not clinched major national titles, their consistent performance in regional competitions has earned them recognition. Notable seasons include several top finishes in their league standings.
Current Squad and Key Players
The squad boasts talented players like Cemal Kaya (Forward) and Emre Gürkan (Midfielder), both known for their strategic plays and goal-scoring abilities. Their current roster features a blend of experienced veterans and promising young talents.
Key Players
- Cemal Kaya – Forward – Known for his agility and sharp shooting.
- Emre Gürkan – Midfielder – Renowned for his playmaking skills.
Team Playing Style and Tactics
Karacabey Belediye Spor typically employs a 4-3-3 formation, emphasizing possession-based play with quick transitions from defense to attack. Their strengths lie in their cohesive midfield control and dynamic forward line, while their weaknesses include occasional lapses in defensive organization.
Tactics Overview
- Formation: 4-3-3
- Strengths: Midfield dominance, quick counterattacks
- Weaknesses: Defensive vulnerabilities under pressure
Interesting Facts and Unique Traits
Fans affectionately call the team “The Lions of Karacabey,” reflecting their fierce playing style. The club’s rivalry with nearby teams adds an extra layer of excitement to their matches. Traditions such as pre-match fan gatherings highlight the strong community support for the team.
Rivalries & Traditions
- Nickname: The Lions of Karacabey
- Rival Teams: Local district rivals that fuel competitive spirit.
Lists & Rankings of Players, Stats, or Performance Metrics
| Name | Position | Avg Goals per Match |
|---|---|---|
| Cemal Kaya | Forward | 0.8 |
| Emre Gürkan | Midfielder | N/A |
Comparisons with Other Teams in the League or Division
Karacabey Belediye Spor is often compared to other regional powerhouses due to its balanced squad and tactical flexibility. While some teams may have star-studded lineups, Karacabey’s strength lies in teamwork and strategic execution.
Case Studies or Notable Matches
A breakthrough came with their victory against a top-tier regional opponent last season, a result that showcased their potential on a larger stage and saw key players step up in critical moments.
Betting Analysis Tables: Team Stats & Recent Form
| Metric | Value |
|---|---|
| Last 5 Matches (W/L/D) | L W L W W |
| Average Goals Scored per Match | 1.6 |
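The recent-form line above translates directly into league points. The helper below is a hypothetical sketch (the `form_points` name is ours, not the club's), assuming the standard 3-1-0 scoring used across Turkish league football:

```python
def form_points(form: str) -> int:
    """Convert a recent-form string like 'L W L W W' into league points,
    assuming 3 points per win, 1 per draw, 0 per loss."""
    points = {"W": 3, "D": 1, "L": 0}
    return sum(points[result] for result in form.split())

# The L W L W W run above: 3 wins and 2 losses over 5 matches
print(form_points("L W L W W"))  # -> 9
```

Nine points from a possible fifteen is solid mid-table form, consistent with the team's profile as a regional contender rather than a runaway leader.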
Recent Head-to-Head Records Against Top Rivals
| Matchup | Wins | Losses | Draws | Avg Goals Scored | Avg Goals Conceded |
|---|---|---|---|---|---|
| Karacabey vs Opponent A | 3 | 1 | 1 | 1.8 | 0.9 |
| Karacabey vs Opponent B | 4 | 0 | 1 | 2.0 | 0.7 |
Odds Comparison with Rivals
| Matchup | Win | Draw | Loss |
|---|---|---|---|
| Karacabey vs Opponent A | +150 | +200 | +250 |
| Karacabey vs Opponent B | +180 | +220 | +260 |
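For readers weighing these lines, American moneyline odds can be converted to implied probabilities. The function below is an illustrative sketch (the `implied_probability` name is ours, and the odds quoted above are the article's example figures, not verified market prices):

```python
def implied_probability(american_odds: int) -> float:
    """Convert American moneyline odds to the bookmaker's implied probability.
    Positive odds +N imply 100 / (N + 100); negative odds -N imply N / (N + 100)."""
    if american_odds > 0:
        return 100 / (american_odds + 100)
    return -american_odds / (-american_odds + 100)

# The +150 win line quoted against Opponent A
print(round(implied_probability(150), 3))  # -> 0.4
```

A +150 win line thus implies roughly a 40% chance of victory before accounting for the bookmaker's margin, which inflates the implied probabilities of all three outcomes so that they sum to more than 1.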