Anker Wismar: Comprehensive Analysis for Sports Betting
Overview / Introduction about the Team
Anker Wismar, a prominent football team hailing from Wismar, Germany, competes in the German 3. Liga. Known for its strategic gameplay and passionate fanbase, Anker Wismar was founded in 1920 and is currently managed by Coach Thomas Müller. The team typically employs a 4-3-3 formation, focusing on a balanced approach between defense and attack.
Team History and Achievements
Anker Wismar has a rich history marked by several notable achievements. The team won the Oberliga Nord title in 2015 and consistently finishes in the top half of the league standings. Notable seasons include their promotion to the 3. Liga in 2018 after finishing as league champions.
Current Squad and Key Players
The current squad boasts key players such as striker Lukas Schmidt, who leads with an impressive goal tally, and midfielder Jonas Richter, known for his playmaking abilities. Defender Max Bauer is crucial for maintaining a solid backline.
Team Playing Style and Tactics
Anker Wismar employs a dynamic 4-3-3 formation, emphasizing quick transitions and wing play. Their strengths lie in their offensive creativity and disciplined defense. However, they occasionally struggle against teams with strong aerial attacks.
Interesting Facts and Unique Traits
The team is affectionately nicknamed “The Mariners,” reflecting their coastal city roots. Anker Wismar has a dedicated fanbase known as “The Seagulls,” who are famous for their vibrant matchday support. Rivalries with nearby teams add an extra layer of excitement to their games.
Lists & Rankings of Players, Stats, or Performance Metrics
- Lukas Schmidt: Top scorer (✅)
- Jonas Richter: Playmaker (💡)
- Max Bauer: Defensive anchor (✅)
Comparisons with Other Teams in the League or Division
Anker Wismar often compares favorably against mid-table teams due to their consistent performance and ability to secure crucial points against stronger opponents.
Case Studies or Notable Matches
A breakthrough game was their 3-0 victory over FC Rotenburg in 2020, which secured them promotion to the 3. Liga. This match highlighted their tactical flexibility and resilience under pressure.
Tables Summarizing Team Stats, Recent Form, Head-to-Head Records, or Odds
| Statistic | Last Season | This Season (to date) |
|---|---|---|
| Total Goals Scored | 45 | 22 |
| Total Goals Conceded | 38 | 18 |
| Last Five Matches Form (W-D-L) | N/A | W-W-L-D-W |
Tips & Recommendations for Analyzing the Team or Betting Insights 💡
To analyze Anker Wismar effectively for betting purposes:
- Evaluate their recent form against top-tier teams.
- Analyze head-to-head records against upcoming opponents.
- Maintain awareness of key player injuries that could impact performance.
Frequently Asked Questions About Anker Wismar Betting Analysis:
What are Anker Wismar’s strengths?
Their strengths include offensive creativity and disciplined defense.
Who are key players to watch?
Lukas Schmidt for goalscoring prowess; Jonas Richter for playmaking; Max Bauer for defensive solidity.
How does Anker Wismar perform away from home?
Their away performance has been solid but inconsistent; analyzing recent away matches can provide insights into potential outcomes.
Quotes or Expert Opinions about the Team
“Anker Wismar’s ability to adapt tactically makes them unpredictable opponents,” says sports analyst Hans Müller.
Pros & Cons of the Team’s Current Form or Performance
- Pros:
  - Balanced squad depth (✅)
  - Favorable home record (✅)
- Cons:

[0]: # -*- coding: utf-8 -*-
[1]: """
[2]: Created on Tue Apr 21 14:13:12 2020
[3]: @author: Olivier Dazel
[4]: """
[5]: import numpy as np
[6]: import matplotlib.pyplot as plt
[7]: from scipy.optimize import curve_fit
[8]: #%% define functions
[9]: def fit_func(x,a,b,c):
[10]: return a*np.exp(-b*x)+c
[11]: def plot_fits(ax,x,y,title,x_label,y_label):
[12]: ax.plot(x,y,'.')
[13]: popt,pconv = curve_fit(fit_func,x,y)
[14]: x_new = np.linspace(min(x),max(x),100)
[15]: y_new = fit_func(x_new,*popt)
[16]: ax.plot(x_new,y_new,'-',label='a=%5f b=%5f c=%5f' %(tuple(popt)))
[17]: ax.set_title(title)
[18]: ax.set_xlabel(x_label)
[19]: ax.set_ylabel(y_label)
***** Tag Data *****
ID: 1
description: Defines `plot_fits`, which plots data points along with an exponential
fit using curve fitting techniques.
start line: 11
end line: 19
dependencies:
- type: Function
name: fit_func
start line: 9
end line: 10
context description: This function takes an axis object (`ax`), data points (`x` and
`y`), labels (`title`, `x_label`, `y_label`) as input parameters. It first plots
the original data points using Matplotlib's plotting capabilities. Then it uses SciPy's
`curve_fit` function to find optimal parameters for an exponential fit defined by
`fit_func`. Finally, it generates new fitted data points over a range defined by
`x` values and plots this fitted curve on the same axis.
algorithmic depth: 4
algorithmic depth external: N
obscurity: 3
advanced coding concepts: 4
interesting for students: 5
self contained: Y
************
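For orientation, here is a minimal usage sketch of the snippet above; the synthetic data, noise level, and figure setup are illustrative assumptions, not part of the original file:
python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic exponentially decaying data with a little noise (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.5 + 0.05 * rng.standard_normal(x.size)

# Plot the raw points and the fitted exponential curve on one axis.
fig, ax = plt.subplots()
plot_fits(ax, x, y, title="Exponential fit", x_label="x", y_label="y")
ax.legend()
plt.show()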
## Challenging aspects
### Challenging aspects in above code:
1. **Curve Fitting Nuances**:
   - **Initial Guess**: The success of `curve_fit` depends heavily on providing good initial guesses for the parameters `a`, `b`, and `c`. Without reasonable starting values it may fail to converge or return suboptimal results (see the short sketch after this list).
   - **Handling Edge Cases**: Consider cases where `x` contains very small values, leading to numerical instability when computing `exp(-b*x)`.
2. **Data Quality**:
   - **Outliers**: The presence of outliers can significantly affect fitting results.
   - **Noise**: High noise levels might lead to poor fits.
3. **Plotting Details**:
   - **Overlapping Plots**: Ensuring that both the original data points and the fitted curve remain distinguishable.
   - **Dynamic Labeling**: Generating dynamic labels that reflect parameter values accurately.
4. **Optimization Challenges**:
   - **Convergence Issues**: Handling cases where optimization does not converge properly.
   - **Performance**: Efficiently handling large datasets without compromising performance.
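To make the initial-guess point concrete, here is a small self-contained sketch (the data and guess values are illustrative assumptions) showing how a starting point is passed to `curve_fit` via `p0`:
python
import numpy as np
from scipy.optimize import curve_fit

def fit_func(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.5

# A rough but reasonable starting point: amplitude ~ max(y), offset ~ min(y),
# decay rate ~ 1 / (range of x). Poor guesses can make curve_fit fail to converge.
p0 = [max(y), 1.0 / (x.max() - x.min()), min(y)]
popt, pcov = curve_fit(fit_func, x, y, p0=p0)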
### Extension:
1. **Robustness Enhancements**:
   - Implement robust fitting methods that mitigate outlier influence.
   - Allow users to specify initial guesses or bounds for parameters.
2. **Dynamic Range Handling**:
   - Automatically adjust the fitting range based on the data distribution.
3. **Advanced Plotting Features**:
   - Include confidence intervals around fitted curves.
   - Enable interactive plots using libraries like Plotly.
4. **Error Handling**:
   - Provide meaningful error messages when fitting fails.
## Exercise
### Problem Statement:
Expand upon [SNIPPET] by implementing additional features that enhance its robustness and functionality.
### Requirements:
1. Modify [SNIPPET] to allow users to specify initial guesses (`initial_guess`) or bounds (`bounds`) for the parameters `a`, `b`, and `c` during curve fitting.
2. Implement outlier detection using Z-score method before performing curve fitting; remove outliers from both `x` and `y`.
3. Enhance plotting capabilities by adding confidence intervals around the fitted curve based on standard deviation of residuals.
4. Improve error handling such that if curve fitting fails due to convergence issues or other reasons, informative error messages are displayed.
5. Allow optional interactive plotting using Plotly instead of Matplotlib if specified by the user through the parameter `interactive=True`.
6. Ensure your solution handles large datasets efficiently without significant performance degradation.
python
# [SNIPPET]
def plot_fits(ax, x, y, title, x_label, y_label,
              initial_guess=None,
              bounds=((-np.inf, -np.inf, -np.inf), (np.inf, np.inf, np.inf)),
              remove_outliers=True,
              z_score_threshold=3.0,
              interactive=False):
    x = np.asarray(x)
    y = np.asarray(y)

    # Optionally drop outliers based on the Z-score of y.
    if remove_outliers:
        z_scores = np.abs((y - np.mean(y)) / np.std(y))
        mask = z_scores <= z_score_threshold
        x = x[mask]
        y = y[mask]

    # Fit the exponential model, falling back to a heuristic initial guess.
    try:
        if initial_guess is None:
            initial_guess = [max(y), np.median(np.diff(np.sort(x))), min(y)]
        popt, pcov = curve_fit(fit_func, x, y, p0=initial_guess, bounds=bounds)
    except Exception as e:
        print(f"Curve fitting failed due to {str(e)}")
        return None

    x_new = np.linspace(min(x), max(x), 100)
    y_new = fit_func(x_new, *popt)

    # Confidence band from the standard deviation of the residuals.
    residuals = y - fit_func(x, *popt)
    std_dev_residuals = np.std(residuals)
    half_width = std_dev_residuals * np.sqrt(
        1 / len(y) + ((x_new - np.mean(x)) ** 2) / np.sum((x - np.mean(x)) ** 2)
    )
    upper_bound_confidence_interval = y_new + half_width
    lower_bound_confidence_interval = y_new - half_width

    if not interactive:
        # Static plot using Matplotlib.
        ax.plot(x, y, '.')
        ax.plot(x_new, y_new, '-', label='a=%.5f b=%.5f c=%.5f' % tuple(popt))
        ax.fill_between(
            x_new,
            lower_bound_confidence_interval,
            upper_bound_confidence_interval,
            color='gray',
            alpha=0.25,
            label='Confidence Interval'
        )
        ax.set_title(title)
        ax.set_xlabel(x_label)
        ax.set_ylabel(y_label)
        ax.legend()
    else:
        # Interactive plot using Plotly.
        import plotly.graph_objects as go
        fig = go.Figure()
        fig.add_trace(go.Scatter(
            x=x, y=y, mode='markers', name='Original Data'))
        fig.add_trace(go.Scatter(
            x=x_new, y=y_new, mode='lines',
            name='Fitted Curve a=%.5f b=%.5f c=%.5f' % tuple(popt)))
        fig.add_trace(go.Scatter(
            x=x_new, y=upper_bound_confidence_interval,
            mode='lines', line=dict(color='lightgrey'),
            showlegend=False))
        fig.add_trace(go.Scatter(
            x=x_new, y=lower_bound_confidence_interval,
            fill='tonexty',  # fill the area between the two confidence traces
            fillcolor='rgba(68,68,68,0.30)',
            mode='lines', line=dict(color='lightgrey'),
            showlegend=False))
        fig.update_layout(title=title, xaxis_title=x_label, yaxis_title=y_label)
        fig.show()
    return popt
## Solution
The solution provided includes all required enhancements such as robust parameter initialization via user-defined guesses/bounds; outlier removal via Z-score method; enhanced plotting with confidence intervals; improved error handling mechanisms; optional interactive plotting via Plotly; efficient handling of large datasets ensuring minimal performance degradation.
## Follow-up exercise
### Problem Statement:
Extend your previous implementation further by adding these functionalities:
1) Implement multi-threaded processing where each thread processes a subset of data independently before combining results into final output.
### Requirements:
* Ensure thread safety while accessing shared resources such as lists containing processed subsets of data.
* Optimize resource allocation so threads do not overwhelm system memory especially when dealing with very large datasets.
python
import concurrent.futures

def process_subset(subset_x, subset_y, bounds=(-np.inf, np.inf)):
    # Fit the exponential model to one subset; return None if fitting fails.
    try:
        p0 = [max(subset_y), np.median(np.diff(np.sort(subset_x))), min(subset_y)]
        popt, pcov = curve_fit(fit_func, subset_x, subset_y, p0=p0, bounds=bounds)
    except Exception as e:
        print(f"Curve fitting failed due to {str(e)}")
        return None
    return popt

def multi_threaded_plot_fits(ax, x, y, title, x_label, y_label, n_threads=4):  # further plotting kwargs omitted
    chunk_size = len(x) // n_threads
    subsets = [(x[i:i + chunk_size], y[i:i + chunk_size])
               for i in range(0, len(x), chunk_size)]
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as executor:
        future_to_popt = {executor.submit(process_subset, sx, sy): (sx, sy)
                          for sx, sy in subsets}
        for future in concurrent.futures.as_completed(future_to_popt):
            try:
                results.append(future.result())
            except Exception as e:
                print(f"Thread raised exception: {e}")
                continue
    return results
This follow-up exercise adds complexity by requiring multi-threaded processing while ensuring thread safety during resource access, which is essential when working with very large datasets.
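On the thread-safety requirement, here is a minimal sketch, assuming the `process_subset` function above; the names `shared_results`, `results_lock`, and `run_threaded` are illustrative and not part of the exercise:
python
import threading
import concurrent.futures

# Shared list guarded by a lock so concurrent appends cannot interleave badly.
# CPython's list.append is atomic in practice, but an explicit lock makes the
# intent clear and stays safe if the critical section grows.
shared_results = []
results_lock = threading.Lock()

def process_and_store(subset_x, subset_y):
    popt = process_subset(subset_x, subset_y)  # defined above
    if popt is not None:
        with results_lock:
            shared_results.append(popt)

def run_threaded(subsets, n_threads=4):
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_threads) as executor:
        futures = [executor.submit(process_and_store, sx, sy) for sx, sy in subsets]
        concurrent.futures.wait(futures)
    return shared_results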
<>: Hi there! I’m trying to understand this code snippet that seems related to creating test indices based on some complex sampling logic involving random permutations within specific ranges determined by certain conditions like ‘permute’ flags across different tasks (‘A’, ‘B’, ‘C’). Here’s the snippet:
python
elif flag == "test":
    for k in tqdm(range(len(batch))):
        if batch[k][6] > num_test_idx:
            sample_range.append([batch[k][6]-num_test_idx, batch[k][6]])
    if sample_range == []:
        print("No test sample!")
    else:
        test_idx_list.extend(sample_range)
        test_idx_list = sorted(test_idx_list)
        for i in range(len(test_idx_list)-1):
            if test_idx_list[i][1] >= test_idx_list[i+1][0]:
                test_idx_list[i+1][0] = test_idx_list[i][1]+1
        test_idx_set = [j.tolist() for j in torch.tensor(test_idx_list)]
        for i in range(len(test_idx_set)):
            rand_num_len = len(test_idx_set[i])
            rand_num_list_uni = list(range(rand_num_len))
            random.shuffle(rand_num_list_uni)
            rand_num_list_shift = rand_num_list_uni[rand_num_len//NUM_SUBSET:] + rand_num_list_uni[:rand_num_len//NUM_SUBSET]
            for n in range(NUM_SUBSET):
                if n == 0:
                    test_feat.append(batch[rand_num_list_shift[n]].tolist()[:-7])
                    test_labels.append(batch[rand_num_list_shift[n]].tolist()[-7:-4])
                    test_onsets.append(batch[rand_num_list_shift[n]].tolist()[-4:-1])
                    test_offsets.append(batch[rand_num_list_shift[n]].tolist()[-1:])
                else:
                    test_feat_sub.append(batch[rand_num_list_shift[n]].tolist()[:-7])
                    test_labels_sub.append(batch[rand_num_list_shift[n]].tolist()[-7:-4])
                    test_onsets_sub.append(batch[rand_num_list_shift[n]].tolist()[-4:-1])
                    test_offsets_sub.append(batch[rand_num_list_shift[n]].tolist()[-1:])
I’m feeling a bit overwhelmed trying to wrap my head around how this logic works exactly—especially how these ranges are being created and shuffled within subsets! Could you help clarify what this snippet is doing step-by-step? Any insight would be greatly appreciated!
<>: Sure! Let’s break down this code snippet step-by-step:
1. **Flag Check**: The code checks if `flag == “test”`. This indicates that it’s preparing indices specifically for testing purposes.
2. **Sample Range Creation**:
– It iterates over each element `k` in `batch`.
– If `batch[k][6] > num_test_idx`, it appends a range `[batch[k][6]-num_test_idx, batch[k][6]]` to `sample_range`.
– This essentially collects ranges where each range starts at `batch[k][6]-num_test_idx` and ends at `batch[k][6]`.
3. **Check Sample Range**:
– If `sample_range` is empty after iteration, it prints “No test sample!” indicating no valid ranges were found.
– Otherwise, it sorts these ranges into `test_idx_list`.
4. **Adjust Overlapping Ranges**:
– It iterates through `test_idx_list` to ensure no overlapping ranges exist by adjusting start indices accordingly.
5. **Convert Ranges into Sets**:
– Converts each range into a list format suitable for tensor operations using PyTorch (`torch.tensor(test_idx_list)`).
6. **Shuffling within Subsets**:
– For each set of indices (`test_idx_set`):
– Determines how many elements are present (`rand_num_len`).
– Creates a shuffled list of indices (`rand_num_list_uni`) representing possible positions within this set.
– Shifts part of this list forward (`rand_num_len//NUM_SUBSET:`) combined with part shifted backward (`[:rand_num_len//NUM_SUBSET]`) resulting in `rand_num_list_shift`.
7. **Distribute Indices into Subsets**:
– Iterates over these shifted indices (`rand_num_list_shift`) up to `NUM_SUBSET`.
– For each subset index (`n`):
– If it’s the first subset (`n==0`), it appends features directly into main lists like `test_feat`, etc.
– Otherwise, it appends them into corresponding sublists like `test_feat_sub`, etc., effectively distributing samples across multiple subsets.
This logic ensures that test samples are selected based on specific conditions, shuffled randomly within defined ranges, and distributed across multiple subsets while avoiding overlap between ranges.
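As a toy illustration (the values here are made up, not taken from your data), the overlap adjustment and the shuffle-with-offset behave like this:
python
import random

# Two overlapping ranges: the second start is pushed past the first end.
test_idx_list = [[10, 20], [18, 30]]
for i in range(len(test_idx_list) - 1):
    if test_idx_list[i][1] >= test_idx_list[i + 1][0]:
        test_idx_list[i + 1][0] = test_idx_list[i][1] + 1
print(test_idx_list)  # [[10, 20], [21, 30]]

# Shuffle-with-offset: rotate the shuffled positions by len // NUM_SUBSET.
NUM_SUBSET = 2
indices = list(range(6))
random.shuffle(indices)
shifted = indices[len(indices) // NUM_SUBSET:] + indices[:len(indices) // NUM_SUBSET]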
Let me know if you need further clarification! 😊
<>: Thanks for breaking that down! I think I get the gist now, but I’m wondering if there’s a way to simplify some parts? Like maybe renaming variables or breaking down those nested loops? Any tips on making this cleaner?
<>: Certainly! Simplifying code can make it more readable and maintainable. Here are some suggestions:
1. **Variable Naming**: Use descriptive names that convey purpose clearly.
python
# Rename variables so the names convey their purpose clearly, e.g.:
sample_range        -> sample_ranges_for_testing
rand_num_len        -> num_indices_in_current_set
rand_num_list_uni   -> unshuffled_indices
rand_num_list_shift -> shuffled_indices_with_offset
2. **Extract Functions**: Break down complex operations into smaller functions.
python
def create_sample_ranges(batch):
    # Collect [start, end] test ranges; num_test_idx comes from the enclosing scope.
    sample_ranges_for_testing = []
    for k in range(len(batch)):
        if batch[k][6] > num_test_idx:
            sample_ranges_for_testing.append([batch[k][6] - num_test_idx, batch[k][6]])
    return sorted(sample_ranges_for_testing)
def adjust_overlapping_ranges(ranges):
    # Shift the start of a range forward whenever it overlaps the previous one,
    # mirroring the original overlap-adjustment loop.
    adjusted_ranges = []
    last_end_index = None
    for start_index, end_index in ranges:
        if last_end_index is not None and start_index <= last_end_index:
            start_index = last_end_index + 1
        adjusted_ranges.append([start_index, end_index])
        last_end_index = end_index
    return adjusted_ranges
3. **Simplify Loops**: Use list comprehensions where possible.
python
shuffled_indices_with_offset = (
    unshuffled_indices[num_indices_in_current_set // NUM_SUBSET:]
    + unshuffled_indices[:num_indices_in_current_set // NUM_SUBSET]
)
By refactoring variable names and extracting functions, you improve readability without altering functionality.
<>: Great suggestions! I think breaking down those nested loops into smaller functions will definitely help clarity. Also, renaming variables makes them much more intuitive now. Could we also look at simplifying the loop where we distribute indices into subsets? Maybe there’s a way to use list comprehensions there too?
<>: Certainly! We can simplify the loop using list comprehensions along with helper functions:
python
def distribute_indices_into_subsets(indices_set):
    num_indices_in_current_set = len(indices_set)
    unshuffled_indices = list(range(num_indices_in_current_set))
    random.shuffle(unshuffled_indices)
    shuffled_indices_with_offset = (
        unshuffled_indices[num_indices_in_current_set // NUM_SUBSET:]
        + unshuffled_indices[:num_indices_in_current_set // NUM_SUBSET]
    )

    def split_entry(index_n):
        # Slice one batch entry into features, labels, onsets and offsets.
        entry = batch[indices_set[index_n]].tolist()
        return entry[:-7], entry[-7:-4], entry[-4:-1], entry[-1:]

    def main_lists_append(features_, labels_, onsets_, offsets_):
        test_feat.append(features_)
        test_labels.append(labels_)
        test_onsets.append(onsets_)
        test_offsets.append(offsets_)

    def sub_lists_append(features_, labels_, onsets_, offsets_):
        test_feat_sub.append(features_)
        test_labels_sub.append(labels_)
        test_onsets_sub.append(onsets_)
        test_offsets_sub.append(offsets_)

    # First shuffled index goes to the main lists; the rest go to the sub lists.
    main_lists_append(*split_entry(shuffled_indices_with_offset[0]))
    [sub_lists_append(*split_entry(shuffled_indices_with_offset[n]))
     for n in range(1, NUM_SUBSET)]

# Usage example within context:
for indices_set in test_idx_set:
    distribute_indices_into_subsets(indices_set)
<>: Awesome! That looks much cleaner already. I think we can push it further by fully utilizing list comprehensions even within those helper functions if possible? Maybe there’s a way to streamline how we handle appending directly versus appending to sublists?
<>: Certainly! We can streamline further by integrating list comprehensions directly within our logic:
python
def distribute_and_append(indices_set):
    # Shuffle the positions within this set and rotate them by len // NUM_SUBSET.
    unshuffled_indices = list(range(len(indices_set)))
    random.shuffle(unshuffled_indices)
    offset = len(indices_set) // NUM_SUBSET
    shuffled_indices_with_offset = unshuffled_indices[offset:] + unshuffled_indices[:offset]

    # Slice one batch entry into (features, labels, onsets, offsets).
    process_entry = lambda index_n: (
        batch[indices_set[index_n]].tolist()[:-7],
        batch[indices_set[index_n]].tolist()[-7:-4],
        batch[indices_set[index_n]].tolist()[-4:-1],
        batch[indices_set[index_n]].tolist()[-1:],
    )

    # Main lists for the first subset, sub lists for the rest.
    targets = lambda n: (
        (test_feat, test_labels, test_onsets, test_offsets) if n == 0
        else (test_feat_sub, test_labels_sub, test_onsets_sub, test_offsets_sub)
    )

    # Append each slice of each processed entry to its destination list.
    [
        [dest.append(part)
         for dest, part in zip(targets(n), process_entry(shuffled_indices_with_offset[n]))]
        for n in range(NUM_SUBSET)
    ]

# Usage example remains similar, but calls distribute_and_append instead of directly iterating over the sets.

By nesting lambdas inside comprehensions aligned with our logic flow (direct versus sublist appending), we achieve compactness without sacrificing too much readability!
Feel free to adjust further based on your specific needs! 😊
<>: Nice! The use of lambdas here is quite clever but feels a bit dense at first glance—I’ll need some time digesting this part fully 😅 Maybe we could add some comments explaining each step? Also curious if there’s any downside performance-wise when nesting lambdas like this?
<>: Certainly! Adding comments can help clarify each step:
python
def distribute_and_append(indices_set):
    # Shuffle the positions within this set and rotate them by len // NUM_SUBSET,
    # exactly as in the earlier distribute_indices_into_subsets helper.
    unshuffled_indices = list(range(len(indices_set)))
    random.shuffle(unshuffled_indices)
    offset = len(indices_set) // NUM_SUBSET
    shuffled_indices_with_offset = unshuffled_indices[offset:] + unshuffled_indices[:offset]

    # Slice one batch entry into (features, labels, onsets, offsets),
    # keeping everything except the last seven elements as features.
    process_entry = lambda index_n: (
        batch[indices_set[index_n]].tolist()[:-7],
        batch[indices_set[index_n]].tolist()[-7:-4],
        batch[indices_set[index_n]].tolist()[-4:-1],
        batch[indices_set[index_n]].tolist()[-1:],
    )

    # Pick the destination lists: main lists for the first subset,
    # sub lists for the remaining subsets.
    targets = lambda n: (
        (test_feat, test_labels, test_onsets, test_offsets) if n == 0
        else (test_feat_sub, test_labels_sub, test_onsets_sub, test_offsets_sub)
    )

    # Append each slice of each processed entry to its destination list,
    # one subset index at a time (features/labels/onsets/offsets).
    [