mokka.equalizers.adaptive
Module implementing adaptive equalization.
PyTorch implementations of adaptive equalizers.
- class mokka.equalizers.adaptive.torch.AEQ_SP(R, sps, lr, taps=None, filter_length=31, block_size=1, no_singularity=False)
Bases: Module
Class to perform CMA equalization for a single polarization signal.
- __init__(R, sps, lr, taps=None, filter_length=31, block_size=1, no_singularity=False)
Initialize AEQ_SP.
- Parameters:
R – average radius/kurtosis of constellation
sps – samples per symbol
lr – learning rate of adaptive update algorithm
taps – Initial equalizer taps
filter_length – length of the equalizer filter (if taps are not provided)
block_size – number of symbols per update step
no_singularity – Not used for the single polarization case
- forward(y)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- get_error_signal()
Return error signal used to adapt equalizer taps.
- reset()
Reset equalizer taps.
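The constant-modulus criterion that AEQ_SP adapts on can be sketched in a few lines. This is a standalone plain-Python illustration, not the mokka API: `cma_error` and `cma_tap_update` are hypothetical helper names, and the update is the textbook steepest-descent step on the CMA cost J = E[(R - |y|²)²].

```python
# Hedged sketch of the single-polarization CMA adaptation (not mokka's code).

def cma_error(y: complex, R: float) -> float:
    """CMA error for one equalized sample: zero when |y|^2 hits the radius R."""
    return R - abs(y) ** 2

def cma_tap_update(w, x, y, R, lr):
    """Gradient-style update of the taps w given the input sample window x."""
    e = cma_error(y, R)
    # steepest-descent step on the CMA cost J = E[(R - |y|^2)^2]
    return [w_i + lr * e * y * x_i.conjugate() for w_i, x_i in zip(w, x)]
```

A blind equalizer built on this error needs no pilot sequence, which is why AEQ_SP only requires the constellation statistic R.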
- class mokka.equalizers.adaptive.torch.CMA(R, sps, lr, butterfly_filter=None, filter_length=31, block_size=1, no_singularity=False, singularity_length=3000)
Bases: Module
Class to perform CMA equalization.
- __init__(R, sps, lr, butterfly_filter=None, filter_length=31, block_size=1, no_singularity=False, singularity_length=3000)
Initialize CMA.
- Parameters:
R – constant modulus radius
sps – samples per symbol
lr – learning rate
butterfly_filter – optional mokka.equalizers.torch.Butterfly2x2 object
filter_length – butterfly filter length (if a butterfly_filter object is not given)
block_size – Number of symbols to process before updating the equalizer taps
no_singularity – Initialize the x- and y-polarization to avoid singularity by decoding the signal from the same polarization twice.
singularity_length – Delay for initialization with the no_singularity approach
- butterfly_filter: Butterfly2x2
- forward(y)
Perform CMA equalization on input signal y.
- Parameters:
y – input signal y
- get_error_signal()
Return error signal used to adapt equalizer taps.
- reset()
Reset equalizer and butterfly filters.
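As background for the butterfly_filter argument, the 2x2 butterfly structure that CMA adapts mixes both input polarizations into each output polarization through four FIR branches. The sketch below is a standalone plain-Python stand-in; mokka.equalizers.torch.Butterfly2x2 is assumed to realize the same four-FIR arrangement, and `fir`/`butterfly_2x2` are illustrative names.

```python
# Hedged sketch of the 2x2 butterfly filtering structure (not mokka's code).

def fir(h, x):
    """Single FIR output sample for taps h aligned with an input window x."""
    return sum(h_i * x_i for h_i, x_i in zip(h, x))

def butterfly_2x2(hxx, hxy, hyx, hyy, x_win, y_win):
    """One output sample per polarization from the four butterfly branches."""
    out_x = fir(hxx, x_win) + fir(hxy, y_win)  # x output mixes both inputs
    out_y = fir(hyx, x_win) + fir(hyy, y_win)
    return out_x, out_y
```

With identity taps (hxx = hyy = [1], hxy = hyx = [0]) the butterfly passes both polarizations through unchanged, which is the usual starting point before adaptation.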
- mokka.equalizers.adaptive.torch.ELBO_DP(y, q, sps, constellation_symbols, butterfly_filter, p_constellation=None, IQ_separate=False)
Calculate dual-pol. ELBO loss for arbitrary complex constellations.
Instead of splitting the signal into in-phase and quadrature components, we can process the whole complex signal at once. This implements the dual-polarization case.
- class mokka.equalizers.adaptive.torch.PilotAEQ_DP(sps, lr, pilot_sequence, pilot_sequence_up, butterfly_filter=None, filter_length=31, method='LMS', block_size=1, adaptive_lr=False, adaptive_scale=0.1, preeq_method=None, preeq_offset=3000, preeq_lradjust=1.0, lmszf_weight=0.5)
Bases: Module
Perform pilot-based adaptive equalization.
This class performs equalization on a dual polarization signal with a known dual polarization pilot sequence. The equalization is performed either with the LMS method, ZF method or a novel LMSZF method which combines the regression vectors of LMS and ZF to improve stability and channel estimation properties.
- __init__(sps, lr, pilot_sequence, pilot_sequence_up, butterfly_filter=None, filter_length=31, method='LMS', block_size=1, adaptive_lr=False, adaptive_scale=0.1, preeq_method=None, preeq_offset=3000, preeq_lradjust=1.0, lmszf_weight=0.5)
Initialize PilotAEQ_DP.
- Parameters:
sps – samples per symbol
lr – learning rate to update adaptive equalizer taps
pilot_sequence – Known dual polarization pilot sequence
pilot_sequence_up – Upsampled dual polarization pilot sequence
butterfly_filter – mokka.equalizers.torch.Butterfly2x2 object
filter_length – filter length used to initialize the butterfly filter if a butterfly_filter argument is not provided
method – adaptive update method for the equalizer filter taps
block_size – number of symbols to process before each update step
adaptive_lr – Adapt learning rate during simulation
preeq_method – Use a different method to perform a first-stage equalization
preeq_offset – Length of first-stage equalization
preeq_lradjust – Change learning rate by this factor for first-stage equalization
lmszf_weight – weighting between the ZF and LMS update algorithms if LMSZF is used as the equalization method
- forward(y)
Equalize input signal y.
- Parameters:
y – Complex receive signal y
- reset()
Reset PilotAEQ_DP object.
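The per-symbol LMS step that a pilot-aided equalizer like this performs can be condensed as follows. This is a hedged, standalone plain-Python sketch rather than mokka's implementation: with a known pilot d[k], the error e = d[k] - ŷ[k] drives the tap update against a regression window (received samples for LMS, known pilot samples for ZF); `lms_step` is an illustrative name.

```python
# Hedged sketch of one pilot-aided LMS tap update (not mokka's code).

def lms_step(w, regression_win, y_hat, pilot, lr):
    """Update taps w from the error between the known pilot and the estimate.

    regression_win holds the samples the error is regressed against:
    received samples for LMS, pilot samples for a ZF-style update.
    """
    e = pilot - y_hat  # error against the known pilot symbol
    w_new = [w_i + lr * e * r_i.conjugate() for w_i, r_i in zip(w, regression_win)]
    return w_new, e
```

The LMSZF method mentioned above is described as combining the LMS and ZF regression vectors, weighted by lmszf_weight, before applying a step of this form.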
- class mokka.equalizers.adaptive.torch.PilotAEQ_SP(sps, lr, pilot_sequence, pilot_sequence_up, filter_length=31, method='LMS')
Bases: Module
Perform pilot-based adaptive equalization (QPSK).
- __init__(sps, lr, pilot_sequence, pilot_sequence_up, filter_length=31, method='LMS')
Initialize PilotAEQ_SP.
Pilot-based adaptive equalizer for the single polarization case.
- Parameters:
sps – samples per symbol
lr – learning rate of adaptive equalizer update
pilot_sequence – known transmit pilot sequence
pilot_sequence_up – upsampled known transmit pilot sequence
- forward(y)
Equalize a single polarization signal.
- Parameters:
y – complex single polarization received signal
- reset()
Reset PilotAEQ_SP object.
- class mokka.equalizers.adaptive.torch.VAE_LE_DP(num_taps_forward, num_taps_backward, demapper, sps, block_size=200, lr=0.005, requires_q=False, IQ_separate=False, var_from_estimate=False, num_block_train=None)
Bases: Module
Adaptive Equalizer based on the variational autoencoder principle with a linear equalizer.
This code is based on the work presented in [1].
[1] V. Lauinger, F. Buchali, and L. Schmalen, ‘Blind equalization and channel estimation in coherent optical communications using variational autoencoders’, IEEE Journal on Selected Areas in Communications, vol. 40, no. 9, pp. 2529–2539, Sep. 2022, doi: 10.1109/JSAC.2022.3191346.
- __init__(num_taps_forward, num_taps_backward, demapper, sps, block_size=200, lr=0.005, requires_q=False, IQ_separate=False, var_from_estimate=False, num_block_train=None)
Initialize VAE_LE_DP.
This VAE equalizer is implemented with a butterfly linear equalizer in the forward path and a butterfly linear equalizer in the backward pass. Therefore, it is limited to correcting impairments of linear channels.
- Parameters:
num_taps_forward – number of equalizer taps
num_taps_backward – number of channel taps
demapper – mokka demapper object to perform complex symbol demapping
sps – samples per symbol
block_size – number of symbols per block - defines the update rate of the equalizer
lr – learning rate for the Adam algorithm
requires_q – return q-values in forward call
IQ_separate – process I and Q separately - requires a demapper which performs demapping on real values and a bit mapping which is identical on I and Q.
var_from_estimate – Update the variance in the demapper from the SNR estimate of the output
num_block_train – Number of blocks to train the equalizer before switching to non-training equalization mode (for static channels only)
- forward(y)
Perform equalization of input signal y.
- Parameters:
y – Complex input signal
- update_lr(new_lr)
Update learning rate of VAE equalizer.
- Parameters:
new_lr – new value of learning rate to be set
- update_var(new_var)
Update variance of demapper.
- Parameters:
new_var – new value of variance to be set
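The autoencoder principle behind this class can be sketched without the full training machinery: the forward equalizer produces symbol estimates, the backward filter acts as a learned channel model that reconstructs the received samples, and the reconstruction mismatch drives blind adaptation. The plain-Python sketch below illustrates only that reconstruction term under a simple causal single-tap-per-lag channel model; `reconstruction_loss` is an illustrative name and not part of mokka.

```python
# Hedged sketch of the VAE reconstruction term (not mokka's implementation).

def reconstruction_loss(y, x_hat, h):
    """Mean squared error between received samples y and the decoder output,
    where the decoder convolves the symbol estimates x_hat with channel taps h."""
    loss = 0.0
    for k in range(len(y)):
        # simple causal convolution of the estimated symbols with h
        y_rec = sum(h[i] * x_hat[k - i] for i in range(len(h)) if k - i >= 0)
        loss += abs(y[k] - y_rec) ** 2
    return loss / len(y)
```

In the full method of [1] this term is combined with the variational (ELBO) objective over the demapper's symbol probabilities q, which is what ELBO_DP above computes.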
- mokka.equalizers.adaptive.torch.update_adaptive(y_hat_sym, pilot_seq, regression_seq, idx, length, sps)
Calculate update signal to be used to update adaptive equalizer taps.
- Parameters:
y_hat_sym – Estimated receive symbol sequence
pilot_seq – Known pilot symbol sequence
regression_seq – Regression sequence to be applied to the equalizer taps
idx – symbol index to use for calculation of the error signal
length – Not used in this function, just for API compatibility reasons
sps – Not used in this function, just for API compatibility reasons
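The quantity this function produces boils down to the error at one symbol index multiplied by the conjugated regression sample. The sketch below is a hedged, standalone condensation in plain Python, not mokka's implementation; the unused length/sps arguments are dropped and `update_signal` is an illustrative name.

```python
# Hedged sketch of the per-symbol update signal (not mokka's code).

def update_signal(y_hat_sym, pilot_seq, regression_seq, idx):
    """Error between known pilot and estimate at idx, projected onto the
    regression sample, giving the direction for the tap update."""
    e = pilot_seq[idx] - y_hat_sym[idx]          # error at the chosen symbol
    return e * regression_seq[idx].conjugate()   # per-tap update direction
```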