I Introduction
The increasing availability of large-scale, fine-grained datasets provides an unprecedented opportunity for quantitative studies of complex systems. Nonetheless, a shift towards data-driven modelling of these systems requires efficient algorithms for analysing multivariate time series, which are obtained from observation of the activity of a large number of elements.
In the field of neuroscience, the multivariate time series typically obtained from brain recordings serve to infer minimal (effective) network models which can explain the dynamics of the nodes in a neural system. The motivation for such models can be, for instance, to describe a causal network (Friston, 1994; Ay and Polani, 2008) or to model the directed information flow in the system (Vicente et al., 2011) in order to produce a minimal computationally equivalent network (Lizier and Rubinov, 2012).
Information theory (Shannon, 1948; Cover and Thomas, 2005) is well suited for the latter motivation of inferring networks that describe information flow as it provides model-free measures that can be applied at different scales and to different types of recordings. These measures, including conditional mutual information (Cover and Thomas, 2005) and transfer entropy (Schreiber, 2000), are based purely on probability distributions and are able to identify nonlinear relationships (Paluš et al., 1993). Most importantly, information-theoretic measures allow the interpretation of the results from a distributed computation or information processing perspective, by modelling the information storage, transfer, and modification within the system (Lizier, 2013). Therefore, information theory simultaneously provides the tools for building the network model and the mathematical framework for its interpretation.

The general approach to network model construction can be outlined as follows: for any target process (element) in the system, the inference algorithm selects the minimal set of processes that collectively contribute to the computation of the target's next state. Every process can be separately studied as a target and the results can be combined into a directed network describing the information flows in the system.
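Schematically, this per-target decomposition can be written as follows (a toy stdlib-only illustration; `infer_parents` is a hypothetical stand-in for the full per-target inference developed in the Methods):

```python
def infer_network(processes, infer_parents):
    """Analyse each process as a target and combine the selected sources
    into a directed network of (source, target) links."""
    links = set()
    for target in processes:
        # All other processes are candidate sources for this target.
        sources = infer_parents(target, [p for p in processes if p != target])
        links.update((s, target) for s in sources)
    return links
```

Because each target is analysed independently, the per-target analyses can be distributed across workers and the resulting link sets merged afterwards.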
This task presents several challenges:

The state space of the possible network models grows faster than exponentially with respect to the size of the network;

Information-theoretic estimators suffer from the "curse of dimensionality" for large sets of variables (Roulston, 1999; Paninski, 2003);

In a network setting, statistical significance testing requires multiple comparisons, which results in a high false positive rate (type I errors) without adequate family-wise error rate controls (Dickhaus, 2014), or a high false negative rate (type II errors) with naive control procedures.
Several previous studies (Vlachos and Kugiumtzis, 2010; Faes et al., 2011; Lizier and Rubinov, 2012; Sun et al., 2015) proposed greedy algorithms to tackle the first two challenges outlined above (see a summary by Bossomaier et al. (2016, sec. 7.2)). These algorithms mitigate the curse of dimensionality by greedily selecting the random variables that iteratively reduce the uncertainty about the present state of the target. The reduction of uncertainty is rigorously quantified by the information-theoretic measure of conditional mutual information (CMI), which can also be interpreted as a measure of conditional independence (Cover and Thomas, 2005). In particular, these previous studies employed multivariate forms of the transfer entropy (i.e., conditional and collective forms (Lizier et al., 2008, 2010)). In general, such greedy optimisation algorithms provide a locally optimal solution to the NP-hard problem of selecting the most informative set of random variables. An alternative optimisation strategy—also based on conditional independence—employs a preliminary step to prune the set of sources (Runge et al., 2012, 2018). Despite this progress, the computational challenges posed by the estimation of multivariate transfer entropy have severely limited the network sizes and sample sizes investigated in previous validation studies in the general case of nonlinear estimators (e.g., by Montalto et al. (2014), Kim et al. (2016), and Runge et al. (2018)). However, modern neural recordings often provide hundreds of nodes and tens of thousands of samples.

These computational challenges, as well as the multiple testing challenges described above, are addressed here by the implementation of rigorous statistical tests, which represent the main theoretical contribution of this paper. These tests are used to control the family-wise error rate and are compatible with parallel processing, allowing the simultaneous analysis of multiple targets. This is a crucial feature, which enabled an improvement on the previous greedy algorithms: exploiting the parallel computing capabilities of high-performance computing clusters and graphics processing units (GPUs) enabled the analysis of networks at a scale relevant for brain recordings. Our algorithm has been implemented in the recently released IDTxl Python package (the "Information Dynamics Toolkit xl" (Wollstadt et al., 2019)), an open-source Python package available on GitHub (github.com/pwollstadt/IDTxl); in this paper, we refer to the current release at the time of writing (v1.0).
We validated our method on synthetic datasets involving random structural networks of increasing size (also referred to as ground truth) and different types of dynamics (vector autoregressive processes and coupled logistic maps). In general, effective networks are able to reflect dynamic changes in the regime of the system and do not reflect an underlying structural network. Nonetheless, in the absence of hidden nodes (and under other assumptions, including stationarity and the causal Markov condition), the inferred information network was proven to reflect the underlying structure for a sufficiently large sample size (Sun et al., 2015). Experiments under these conditions provide arguably the most important validation that the algorithm performs as expected, and here we perform the first large-scale empirical validation for non-Gaussian variables. As shown in the Results, the performance of our algorithm increased with the length of the time series, reaching consistently high precision, recall, and specificity for the longest time series tested. Varying the statistical significance threshold showed a more favourable precision-recall trade-off for longer time series.

II Methods
II.1 Definitions and assumptions
Let us consider a system of discrete-time stochastic processes for which a finite number of samples have been recorded (over time and/or in different replications of the same experiment). In general, let us assume that the stochastic processes are stationary in each experimental time window and Markovian with finite memory, i.e., the present state of the target does not depend on the past values of the target and the sources beyond a maximum finite lag. Further assumptions will be made for the validation study.
The following quantities are needed for the setup and formal treatment of the algorithm and are visualised in Figure 1 and Figure 2:
target process: a process of interest within the system; the choice of the target process is arbitrary and all the processes in the system can separately be studied as targets;

source processes: the remaining processes within the system;

sample number (or size): the number of samples recorded over time;

replication number: the number of replications of the same experiment (e.g., trials);

target present state: the random variable (RV) representing the state of the target at the present time, whose information contributors will be inferred;

candidate target past: an arbitrary finite set of RVs in the past of the target, up to a maximum lag;

candidate sources past: an arbitrary finite set of RVs in the past of the sources, up to a maximum lag;

selected target past: the subset of RVs within the candidate target past set that maximally reduces the uncertainty about the present state of the target;

selected sources past: the subset of RVs within the candidate sources past set that maximally further reduces the uncertainty about the present state of the target, in the context of the selected target past (explained in detail in the following section).
II.2 Inference algorithm
For a given target process, the goal of the algorithm is to infer the minimal set of information contributors to the target—defined as the selected sources past—in the context of the relevant information contributors from the candidate target past set, defined as the selected target past.
The algorithm operates in four steps:

1. Select variables in the candidate target past set to obtain the selected target past set;

2. Select variables in the candidate sources past set to obtain the selected sources past set;

3. Prune the selected sources past variables;

4. Test the relevant variables collectively for statistical significance.
The operations performed in the four steps are described in detail hereafter; the result is a non-uniform embedding of the target and sources time series, as illustrated in Figure 2. (The term embedding refers to the property of the selected set of capturing the underlying state of the process as it relates to the target's next value, akin to a Takens' embedding (Takens, 1981), yet with non-uniform delays between selected points (Vlachos and Kugiumtzis, 2010; Faes et al., 2011).)
II.2.1 Step 1: Select variables in the candidate target past set
The goal of the first step is to find the subset of RVs within the candidate target past set that maximally reduces the uncertainty about the present state of the target while meeting statistical significance requirements. The selected target past set is the result of optimisation under these criteria.
Finding the globally optimal embedding is an NP-hard problem and requires testing all the subsets of the candidate target past set. Since the number of subsets grows exponentially with the size of the candidate set, this is computationally unfeasible; therefore, a greedy approximation algorithm is employed to find a locally optimal solution in the search space of possible embeddings. This approach tackles the challenge of computational complexity by aiming at identifying a minimal conditioning set; in doing so, it also tackles the curse of dimensionality in the estimation of information-theoretic functionals.
The selected target past set is initialised as an empty set and is iteratively built up via the following algorithm:

(a) For each candidate variable in the candidate target past set, estimate its CMI contribution to the present state of the target, conditional on the variables already selected;

(b) Find the candidate which maximises the CMI contribution (reduction of uncertainty) and perform a statistical significance test against the null hypothesis of conditional independence, i.e., that the new variable does not further reduce the uncertainty in the context of the previously included variables. If significant, add the candidate to the selected target past set and remove it from the candidate set. The maximum statistic is employed to control the family-wise error rate (explained in detail in the Statistical tests section);

(c) Repeat the previous steps until the maximum CMI contribution is not significant or the candidate set is empty.
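As an illustration of the greedy selection used in steps 1 and 2, the following self-contained sketch combines the iterative candidate selection with a surrogate-based maximum statistic test for discrete data. A toy plug-in CMI estimator stands in for the estimators discussed in the paper; all names and parameter defaults are illustrative assumptions, not IDTxl's implementation:

```python
import math
import random
from collections import Counter

def plugin_cmi(x, y, z):
    """Plug-in estimator of I(X;Y|Z) in bits for discrete sequences.
    z is a sequence of (possibly empty) tuples of conditioning values."""
    n = len(x)
    pxyz = Counter(zip(x, y, z))
    pxz = Counter(zip(x, z))
    pyz = Counter(zip(y, z))
    pz = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in pxyz.items():
        cmi += (c / n) * math.log2(c * pz[zi] / (pxz[(xi, zi)] * pyz[(yi, zi)]))
    return cmi

def greedy_select(target_now, candidates, n_surr=50, alpha=0.05, seed=0):
    """Greedily add the candidate with the largest CMI contribution,
    testing it against a surrogate-based maximum statistic."""
    rng = random.Random(seed)
    selected = []
    remaining = dict(candidates)

    def conditioning(sel):
        # Conditioning tuple built from the already-selected variables.
        return [tuple(candidates[s][t] for s in sel)
                for t in range(len(target_now))]

    while remaining:
        z = conditioning(selected)
        gains = {name: plugin_cmi(seq, target_now, z)
                 for name, seq in remaining.items()}
        best = max(gains, key=gains.get)
        # Null distribution: shuffle each candidate in time (destroying its
        # relationship with the target) and take the maximum over candidates.
        null = []
        for _ in range(n_surr):
            vals = []
            for seq in remaining.values():
                surr = list(seq)
                rng.shuffle(surr)
                vals.append(plugin_cmi(surr, target_now, z))
            null.append(max(vals))
        p_value = sum(v >= gains[best] for v in null) / n_surr
        if p_value >= alpha:
            break  # maximum contribution not significant: stop
        selected.append(best)
        del remaining[best]
    return selected
```

In a toy test, a binary source driving the target (flipped with 10% noise) should be selected before an independent control sequence, since shuffled surrogates yield only small spurious CMI values.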
From a distributed, intrinsic computation perspective, the goal can be interpreted as finding the embedding of the target's past states that maximises the active information storage (defined as the mutual information between the past and the present of the target (Lizier et al., 2012)). This ensures self-prediction optimality as suggested by Wibral et al. (2013), similar to Garland et al. (2016), but with non-uniform embedding and additional statistical controls.
The nonuniform embedding of the time series was introduced by Vlachos and Kugiumtzis (2010) and Faes et al. (2011), who used an arbitrary threshold for the conditional mutual information. Lizier and Rubinov (2012) introduced a statistical significance test to select the candidates, which this study builds upon in proposing the maximum statistic. In addition, they embedded the target time series before embedding the sources, i.e., the active information storage is modelled first and the information transfer is then examined in that context, thereby taking a specific modelling perspective on the information processing carried out by the system.
II.2.2 Step 2: Select variables in the candidate sources past set
The goal of the second step is to find the subset of RVs within the candidate sources past set that maximally further reduces the uncertainty about the present state of the target, in the context of the selected target past found in the first step. The selected sources past set is the result of optimisation under these criteria.
As for step 1, a greedy approximation algorithm is employed and the statistical significance is tested throughout the selection process. The selected sources past set is initialised as an empty set and is iteratively built up via the following algorithm:

(a) For each candidate variable in the candidate sources past set, estimate its conditional transfer entropy contribution (Verdes, 2005; Lizier et al., 2008, 2010; Vakorin et al., 2009). When the selected sources past set is empty, this is simply a pairwise or bivariate transfer entropy (Schreiber, 2000); using the conditional form while building the set serves to prevent candidates carrying only redundant information (due to, e.g., common driver or pathway effects) from being selected, as well as to capture synergistic interactions with the previously selected variables;

(b) Find the candidate which maximises the conditional transfer entropy contribution (reduction of uncertainty) and perform a statistical significance test against the null hypothesis of conditional independence: if significant, add the candidate to the selected set and remove it from the candidate set. The maximum statistic is employed to control the family-wise error rate;

(c) Repeat the previous steps until the maximum conditional transfer entropy contribution is not significant or the candidate set is empty.
From a distributed computation perspective, the goal can be interpreted as finding the non-uniform embedding of the source processes' past that maximises the collective transfer entropy to the target (Lizier et al., 2010). As above, the rationale for embedding the past of the sources as a second step is to achieve optimal separation of the storage and transfer contributions (Lizier and Rubinov, 2012).
II.2.3 Step 3: Prune the selected sources past variables
The third step of the algorithm is a pruning procedure performed to ensure that the variables included in the early iterations of the second step still provide a statistically significant information contribution in the context of the final selected sources past set. The pruning step involves the following operations:

(a) For each variable in the selected sources past set, estimate its CMI contribution in the context of all the other selected variables;

(b) Find the variable which minimises the CMI contribution and perform a statistical significance test: if not significant, remove the variable from the selected set. The minimum statistic is employed to test for significance against the null hypothesis of conditional independence while controlling the family-wise error rate;

(c) Repeat the previous steps until the minimum CMI contribution is significant or the selected set is empty.
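The pruning loop can be condensed as follows (a minimal sketch: the `contribution` and `is_significant` callables are placeholders for the CMI estimator and the minimum statistic test, respectively):

```python
def prune(selected, contribution, is_significant):
    """Iteratively find the weakest selected variable (minimum CMI
    contribution in the context of the full selected set) and remove it
    if its contribution is not significant."""
    selected = list(selected)
    while selected:
        weakest = min(selected, key=lambda v: contribution(v, selected))
        if is_significant(contribution(weakest, selected)):
            break  # every remaining variable contributes significantly
        selected.remove(weakest)
    return selected
```

Note that `contribution` receives the current selected set, reflecting that each contribution is re-estimated in the context of the remaining variables.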
The pruning step was introduced by Lizier and Rubinov (2012); remarkably, Sun et al. (2015) proved that this step is essential for the theoretical convergence of the inferred network to the causal network in the Granger-Wiener framework; they also rigorously laid out the mathematical assumptions needed for such convergence (see Validation tasks section).
II.2.4 Step 4: Test relevant variables collectively for statistical significance
The fourth and final step of the algorithm is the computation of the collective transfer entropy from the selected sources past set to the target, together with an omnibus test to ensure statistical significance against the null hypothesis of conditional independence. The resulting omnibus p-value can further be used for correction of the family-wise error rate if the inference is carried out for multiple targets. The set is only accepted as a result if all the statistical tests are passed. Importantly, the selected sources past set, inferred in the context of the selected target past, is the final result of the algorithm for a given target process; the order in which variables were inferred is not relevant.
The statistical tests play a fundamental role in the inference and provide the stopping conditions for the iterations involved in the first and second steps of the algorithm. These stopping conditions are adaptive and change according to the amount of data available (the length of the time series). Given their importance, the statistical tests are described in detail in the following section.
II.3 Statistical tests
The crucial steps in the inference algorithm rely on determining whether the CMI is positive. However, due to the finite sample size, the CMI estimators may produce non-zero estimates in the case of zero CMI, and may even return negative estimates if the estimator bias is larger than the true CMI (Roulston, 1999; Kraskov et al., 2004). For this reason, statistical tests are required to assess the significance of the CMI estimates against the null hypothesis of no CMI (i.e., conditional independence) (Chávez et al., 2003; Vicente et al., 2011; Lindner et al., 2011; Lizier et al., 2011).
For certain estimators, analytic solutions exist for the finitesample distribution under this null hypothesis (see Lizier (2014)); in the absence of an analytic solution, the null distributions are computed in a nonparametric way by using surrogate time series (Schreiber and Schmitz, 2000). The surrogates are generated to satisfy the null hypothesis by destroying the temporal relationship between the source and the target while preserving the temporal dependencies within the sources.
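The replication-based surrogate generation described above can be sketched as follows (a minimal stdlib illustration, assuming data organised as a list of trials):

```python
import random

def replication_shuffle(candidate_trials, seed=0):
    """Surrogate generation by shuffling whole replications (trials) of the
    candidate source: the temporal order of samples within each trial is
    preserved, but the trial-level pairing with the target is destroyed."""
    rng = random.Random(seed)
    order = list(range(len(candidate_trials)))
    rng.shuffle(order)
    return [candidate_trials[i] for i in order]
```

Each trial survives intact in the surrogate, so within-trial temporal dependencies of the source are preserved, as required by the null hypothesis.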
Finally, the inference algorithm is based on multiple comparisons and requires an appropriate calibration of the statistical tests to achieve the desired family-wise error rate (i.e., the probability of making one or more false discoveries, or type I errors, when performing multiple hypothesis tests). The maximum statistic and minimum statistic tests employed in this study were specifically conceived to tackle these challenges.
II.3.1 Maximum statistic test
The maximum statistic test is a step-down statistical test (i.e., a test which proceeds from the smallest to the largest p-value; when the first non-significant p-value is found, all the larger p-values are also deemed not significant) used to control the family-wise error rate when selecting the past variables for the target and sources embeddings, which involves multiple comparisons.
Let us first consider the first step of the main algorithm and assume that we have picked the candidate variable (from the candidate target past set) which maximises the CMI contribution. The maximum statistic test described below mirrors this selection process by picking the maximum value among the surrogates. The maximum CMI contribution is the maximum statistic, and the following algorithm is used to test it for statistical significance:

(a) For each candidate variable, generate a surrogate time series and compute the surrogate CMI value. If multiple replications (e.g., trials) are available, surrogate data are generated by shuffling the replications of the candidate while keeping the order of replications for the remaining variables intact, thereby preserving the temporal order of samples within a replication but destroying any potential relationship between the candidate and the target. When the number of replications is not sufficient to guarantee enough permutations, the samples are shuffled in time, e.g., via block shuffling;

(b) Find and store the maximum surrogate CMI value (i.e., the surrogate maximum statistic);

(c) Repeat steps (a) and (b) a sufficient number of times (according to the desired critical level for statistical significance) in order to obtain an empirical distribution of surrogate maximum statistic values, which is used as the null distribution;

(d) Calculate the p-value for the maximum statistic as the fraction of surrogate maximum statistic values that are larger than it;

(e) The maximum statistic is deemed significant if the p-value is smaller than the critical level (i.e., the null hypothesis of conditional independence for the candidate variable with the maximum CMI contribution is rejected at that level).
Note that steps (a)-(c) can be omitted when the null distributions can be derived analytically, e.g., with Gaussian estimators (Barnett and Bossomaier, 2012). The same test is performed during the selection of the variables in the candidate sources past set (step 2 of the main algorithm, with the obvious replacements of the quantities involved).
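The p-value computation in steps (b)-(d) can be condensed as follows (a sketch; `surrogate_cmi` is a hypothetical callable returning the CMI of one shuffled surrogate for a given candidate and permutation index):

```python
def max_statistic_pvalue(observed_max, candidates, surrogate_cmi, n_perm):
    """Empirical p-value for the maximum CMI contribution: for each
    permutation, the null value is the maximum surrogate CMI over all
    remaining candidates; the p-value is the fraction of null values
    that reach the observed maximum statistic."""
    null = [max(surrogate_cmi(c, p) for c in candidates)
            for p in range(n_perm)]
    return sum(v >= observed_max for v in null) / n_perm
```

Taking the maximum over candidates inside each permutation is what mirrors the selection step and provides the family-wise error rate control discussed below.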
II.3.2 Family-wise error rate correction
How does the maximum statistic test control the family-wise error rate? Intuitively, one or more statistics will exceed a given threshold if and only if the maximum exceeds it. This relationship can be used to obtain an adjusted threshold from the distribution of the maximum statistic under the null hypothesis, which controls the family-wise error rate both in the weak and in the strong sense (Nichols and Hayasaka, 2003).
Let us quantify the false positive rate for a single variable when the maximum statistic test at significance level $\alpha_{\max}$ is employed. For simplicity, the derivation is performed under the hypothesis that the information contributors to the target have been selected in the first iterations of the greedy algorithm and removed from the candidate sources past set. Under this hypothesis, the target is conditionally independent of the remaining $m$ variables in the candidate set, given the selected source and target variables. Let $X_1, \ldots, X_m$ be the corresponding CMI estimates and let $X_{\max} = \max_i X_i$ be the maximum statistic. As discussed above, the estimates might be positive even under the conditional independence hypothesis, due to finite-sample effects. Since the estimates are independently obtained by shuffling the time series, they are treated as i.i.d. RVs.

Let $c_{\alpha}$ be the critical threshold corresponding to the given significance level $\alpha_{\max}$, i.e., $P(X_{\max} > c_{\alpha}) = \alpha_{\max}$. Then

(1)  $P(X_{\max} \le c_{\alpha}) = \prod_{i=1}^{m} P(X_i \le c_{\alpha}) = P(X_1 \le c_{\alpha})^m = 1 - \alpha_{\max}.$

Therefore, the false positive rate for each single variable is

(2)  $\alpha_{\mathrm{single}} = P(X_1 > c_{\alpha}) = 1 - (1 - \alpha_{\max})^{1/m}.$

Interestingly, equation (2) shows that the maximum statistic correction is equivalent to the Dunn-Šidák correction (Šidák, 1967). Performing a Taylor expansion of (2) around $\alpha_{\max} = 0$ and truncating it to the first order yields:

(3)  $\alpha_{\mathrm{single}} \approx \alpha_{\max} / m,$

which coincides with the false positive rate resulting from the Bonferroni correction (Dickhaus, 2014). The details are presented in the Supplementary materials, where it is also shown that the maximum statistic correction is marginally less stringent than the Bonferroni correction.
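The relation between the two corrections can be checked numerically (equation numbers refer to the text above; the chosen values of the significance level and of $m$ are arbitrary examples):

```python
def sidak_per_test(alpha_max, m):
    """Per-variable false positive rate implied by the maximum statistic
    test over m i.i.d. null estimates (the Dunn-Sidak form, equation (2))."""
    return 1.0 - (1.0 - alpha_max) ** (1.0 / m)

def bonferroni_per_test(alpha_max, m):
    """First-order Taylor approximation around alpha_max = 0 (equation (3)),
    i.e., the Bonferroni correction."""
    return alpha_max / m
```

For small significance levels the two per-test rates are nearly identical, with the Dunn-Sidak (maximum statistic) value marginally larger, i.e., marginally less stringent.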
Let us now study the effect of the maximum statistic test on the family-wise error rate for a single target, while accounting for all the iterations performed during the step-down test; let $\alpha_{\mathrm{FWER}}$ be the probability that at least one of the selected sources is a false positive. Since the step-down procedure stops at the first non-significant result, a false positive can only be selected if the maximum statistic test on the null candidate variables is passed, so that

(4)  $\alpha_{\mathrm{FWER}} = P(\text{at least one false positive}) = P(X_{\max} > c_{\alpha}).$

Therefore,

(5)  $\alpha_{\mathrm{FWER}} = \alpha_{\max}$

for the typical small values of $\alpha_{\max}$ used in statistical testing (even in the limit of a large number $m$ of candidate variables), which shows that the maximum statistic test effectively controls the family-wise error rate for a single target.
II.3.3 Minimum statistic test
The minimum statistic test is employed during the third main step of the algorithm (the pruning step) to remove the selected variables that have become redundant in the context of the final set of selected sources past variables, while controlling the family-wise error rate. This is necessary because of the multiple comparisons involved in the pruning procedure. The minimum statistic test works identically to the maximum statistic test (replacing "maximum" with "minimum" in the algorithm presented above).
II.3.4 Omnibus test
In the omnibus test, the collective transfer entropy from all the selected sources past variables to the target is tested for statistical significance against the null hypothesis of zero transfer entropy. The null distribution is built using surrogate time series obtained via shuffling, i.e., using a procedure similar to the one described in the Maximum statistic test section above. Testing all the selected sources collectively is in line with the perspective that the goal of the network inference is to find the set of relevant sources for each node.
II.3.5 Combining across multiple targets
When the inference is performed on multiple targets, the omnibus p-values can be employed in further statistical tests to control the family-wise error rate for the overall network (e.g., via FDR correction (Benjamini and Hochberg, 1995; Dickhaus, 2014), which is implemented in the IDTxl toolbox).
It is important to fully understand the statistical questions and the validation procedure implied by this approach. Combining the results across multiple targets by reusing the omnibus test p-values for the FDR correction yields a hierarchical test. The test answers two nested questions: (1) which nodes receive any significant overall information transfer? and, if any, (2) what is the structure of the incoming information transfer to each node? However, the answers are computed in the reverse order, for the following reason: it would be computationally unfeasible to directly compute the collective transfer entropy from all candidate sources to the target right at the beginning of the network inference process. At this point, the candidate source set usually contains a large number of variables, so that estimation would likely fall prey to the curse of dimensionality. Instead, a conservative approximation of the collective information transfer is obtained by considering only a subset of the potential sources, i.e., those deemed significant by the maximum and minimum statistic tests described in the previous sections. Only if this approximation of the total information transfer is also deemed significant by the omnibus test (as well as by the FDR test at the network level) is the subset of significant sources for that target interpreted post hoc as the local structure of the incoming information transfer. This way, the testing procedure exhibits a hierarchical structure: the omnibus test operates at the higher (global) level concerned with the collective information transfer, whereas the minimum and maximum statistic tests operate at the lower (local) level of individual source-target connections.
Compared to a non-hierarchical analysis with a correction for multiple comparisons across all links (e.g., by network-wide Bonferroni correction or the use of the maximum statistic across all potential links), the above strategy buys both statistical sensitivity (recall) and the possibility to trivially parallelise computations across targets. The price to be paid is that a link with a relatively strong information transfer into a node with non-significant overall incoming information transfer may get pruned, while a link with relatively weaker information transfer into a node with significant overall incoming information transfer will prevail. This behaviour clearly differs from a correction for multiple comparisons across all links. Arguably, this difference is irrelevant in many practical cases, although it could become noticeable for networks with high average in-degree and relatively uniform information transfer across the links. The difference can be reduced by setting a conservative critical threshold for the lower-level greedy analysis.
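The network-level FDR correction mentioned above can be sketched with the Benjamini-Hochberg step-up procedure (a minimal stdlib illustration of the general method, not IDTxl's implementation):

```python
def fdr_bh(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns the sorted indices of
    the rejected hypotheses, controlling the false discovery rate at q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        # Compare the rank-th smallest p-value with its step-up threshold.
        if pvalues[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])
```

Here each p-value would be the omnibus p-value of one target, so a rejected hypothesis corresponds to a node with significant overall incoming information transfer.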
II.4 Validation tasks
For the purpose of the validation study, the additional assumptions of causal sufficiency (the set of observed variables includes all their common causes, or the unobserved common causes have constant values) and the causal Markov condition (a variable is independent of every other past variable conditional on all of its direct causes) were made, such that the inferred network was expected to closely reflect the structural network for a sufficiently large sample size (Sun et al., 2015). Although this is not always the case, experiments under these conditions allow the evaluation of the performance of the algorithm with respect to an expected ground truth. An intuitive definition of these conditions is provided here, while the technical details are discussed at length in Spirtes et al. (1993). Moreover, the intrinsic stochastic nature of the processes makes purely synergistic and purely redundant interactions unlikely (and indeed vanishing for large sample size), thus satisfying the faithfulness condition (Spirtes et al., 1993).
The complete network inference algorithm implemented in the IDTxl toolkit (release v0.4.0) was validated on multiple synthetic datasets, where both the structural connectivity and the dynamics were known. Given the general scope of the toolkit, two dynamical models of broad applicability were chosen: a vector autoregressive process (VAR) and a coupled logistic maps process (CLM); both models are widely used in computational neuroscience (Zalesky et al., 2014; Rubinov et al., 2009; ValdesSosa et al., 2011), macroeconomics (Sims, 1980; Lorenz, 1993), and chaos theory (Strogatz, 2015).
The primary goal was to quantify the scaling of the performance with respect to the size of the network and the length of the time series. Sparse directed random Erdős-Rényi networks (Erdős and Rényi, 1959) of increasing size were generated, with the link probability chosen to obtain an expected in-degree of 3 links per node. Both the VAR and the CLM stochastic processes were repeatedly simulated on each causal network with increasingly longer time series, a single replication (or trial), and random initial conditions. The performance was evaluated in terms of precision, recall, and specificity in the classification of the links. Further simulations were carried out to investigate the influence of the critical alpha level for statistical significance and the performance of different estimators of conditional mutual information.
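The network generation and simulation setup can be sketched as follows (a toy stdlib-only illustration; the coefficient and noise values are our own assumptions for the sketch, not the paper's, and the VAR recurrence anticipates the form given in the next subsection):

```python
import random

def random_er_network(n_nodes, expected_in_degree=3.0, seed=0):
    """Directed Erdos-Renyi network: each ordered pair (s, t), s != t, is
    linked independently with p = expected_in_degree / (n_nodes - 1)."""
    rng = random.Random(seed)
    p = expected_in_degree / (n_nodes - 1)
    return {(s, t) for s in range(n_nodes) for t in range(n_nodes)
            if s != t and rng.random() < p}

def simulate_var(links, n_nodes, n_samples, a=0.5, cross_total=0.4,
                 noise_std=0.1, max_lag=5, seed=0):
    """Toy VAR simulation on the network: each target combines its own past
    (self-coupling a) with its sources' pasts at random lags; cross-coupling
    weights are normalised so that a + cross_total < 1, keeping the process
    stable. Coefficient values are illustrative only."""
    rng = random.Random(seed)
    parents = {i: sorted(s for s, t in links if t == i)
               for i in range(n_nodes)}
    lag = {link: rng.randint(1, max_lag) for link in links}
    b = {}
    for i, ps in parents.items():
        raw = [rng.random() for _ in ps]
        tot = sum(raw) or 1.0
        for s, w in zip(ps, raw):
            b[(s, i)] = cross_total * w / tot  # normalised cross-coupling
    # Random initial conditions for the first max_lag steps.
    x = [[rng.gauss(0.0, noise_std) for _ in range(n_nodes)]
         for _ in range(max_lag)]
    for t in range(max_lag, max_lag + n_samples):
        row = []
        for i in range(n_nodes):
            v = a * x[t - 1][i]
            for s in parents[i]:
                v += b[(s, i)] * x[t - lag[(s, i)]][s]
            row.append(v + rng.gauss(0.0, noise_std))
        x.append(row)
    return x[max_lag:]
```

A coupled logistic maps simulation follows the same structure, with the linear update replaced by the logistic map applied to the weighted input.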
II.4.1 Vector autoregressive process
The specific VAR process used in this study is described by the following discrete-time recurrence relation:

(6)  $X_t^{(i)} = a\, X_{t-1}^{(i)} + \sum_{j \in \mathrm{pa}(i)} b_{ij}\, X_{t-\delta_{ij}}^{(j)} + \varepsilon_t^{(i)},$

where $\mathrm{pa}(i)$ denotes the set of causal sources of the target process $X^{(i)}$ and a single random lag $\delta_{ij}$ was used for each source. A Gaussian noise term $\varepsilon_t^{(i)}$ was added at each time step; the noise terms added to different variables were uncorrelated. The self-coupling coefficient $a$ was set to a fixed value and the cross-coupling coefficients $b_{ij}$ were sampled at random from the unit interval and normalised. This choice of parameters guaranteed that the VAR processes were stable (the resulting spectral radii were smaller than one) and had stationary multivariate Gaussian distributions (Atay and Karabacak, 2006). As such, the Gaussian estimator implemented in IDTxl was employed for transfer entropy measurements in VAR processes. Note that transfer entropy and Granger causality (Granger, 1969) are equivalent for Gaussian variables (Barnett et al., 2009); therefore, using the Gaussian estimator with our algorithm can be viewed as extending Granger causality in the same multivariate/greedy fashion.

II.4.2 Coupled logistic maps process
The coupled logistic maps process used in this study is described by the following discrete-time recurrence relations:

(7)  $u_t^{(i)} = a\, X_{t-1}^{(i)} + \sum_{j \in \mathrm{pa}(i)} b_{ij}\, X_{t-\delta_{ij}}^{(j)}, \qquad X_t^{(i)} = f\!\left(u_t^{(i)}\right) + \varepsilon_t^{(i)} \pmod 1.$

At each time step $t$, each node computes the weighted input $u_t^{(i)}$ as a linear combination of its past value and the past of its sources, with the same conditions used for the VAR process on the choice of the random lags $\delta_{ij}$ and of the coupling coefficients $a$ and $b_{ij}$. The value $X_t^{(i)}$ is then computed by applying the logistic map activation function $f(u) = 4u(1-u)$ to the weighted input and adding the Gaussian noise $\varepsilon_t^{(i)}$, with the same properties used for the VAR process. Notice that the coefficient 4 used in the logistic map function corresponds to the fully-developed chaotic regime. The modulo operation ensures that the values remain in the unit interval after the addition of noise. The nearest-neighbour estimators were employed for transfer entropy measurements in the analysis of the CLM processes (in particular, Kraskov's estimator (Kraskov et al., 2004) and its extensions (Frenzel and Pompe, 2007; Vejmelka and Paluš, 2008; Gómez-Herrero et al., 2015)). Nearest-neighbour estimators are model-free and able to detect nonlinear dependencies in stochastic processes with non-Gaussian stationary distributions; fast CPU and GPU implementations are provided by the IDTxl package.

III Results
III Results
III.1 Influence of network size and length of the time series
The aim of the first analysis was to quantify the scaling of the performance with respect to the size of the network and the length of the time series.
The inferred network was built by adding a directed link from a source node to a target node whenever a significant transfer entropy from the source to the target was measured while building the selected sources' past set. The critical alpha level for statistical significance was set to and surrogates were used for all experiments unless otherwise stated. The candidate sets for the target as well as the sources were initialised with a maximum lag of five (corresponding to the largest lag values used in the definition of the VAR and CLM processes).
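Collecting the per-target source sets into a single directed network can be sketched as follows; the dictionary format is a hypothetical stand-in for the algorithm's per-target output.

```python
import numpy as np

def network_from_sources(selected_sources, n_nodes):
    """Build a boolean adjacency matrix adj[source, target] from per-target
    sets of significant sources (hypothetical output format)."""
    adj = np.zeros((n_nodes, n_nodes), dtype=bool)
    for target, sources in selected_sources.items():
        for source in sources:
            adj[source, target] = True
    return adj

# Example: node 0 drives nodes 1 and 2; node 1 drives node 2.
adj = network_from_sources({0: set(), 1: {0}, 2: {0, 1}}, n_nodes=3)
```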
The network inference performance was evaluated in comparison to the known underlying structural network as a binary classification task, using standard statistics based on the number of true positives (TP, i.e., correctly classified existing links), false positives (FP, i.e., absent links falsely classified as existing), true negatives (TN, i.e., correctly classified absent links), and false negatives (FN, i.e., existing links falsely classified as absent). The following standard statistics were employed in the evaluation:

precision = TP / (TP + FP)
recall = TP / (TP + FN)
specificity = TN / (TN + FP)
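These statistics can be computed by comparing the inferred and the ground-truth adjacency matrices; a minimal sketch (self-links are excluded here for illustration):

```python
import numpy as np

def classification_stats(inferred, true_adj):
    """Precision, recall, and specificity of inferred links versus the
    ground-truth network (self-links excluded for illustration)."""
    off_diag = ~np.eye(true_adj.shape[0], dtype=bool)
    inf, tru = inferred[off_diag], true_adj[off_diag]
    tp = np.sum(inf & tru)    # correctly classified existing links
    fp = np.sum(inf & ~tru)   # absent links falsely classified as existing
    tn = np.sum(~inf & ~tru)  # correctly classified absent links
    fn = np.sum(~inf & tru)   # existing links falsely classified as absent
    return {'precision': tp / (tp + fp),
            'recall': tp / (tp + fn),
            'specificity': tn / (tn + fp)}

true_adj = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=bool)
inferred = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=bool)
stats = classification_stats(inferred, true_adj)
```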
The plots in Figure 3 summarise the results in terms of precision and recall, while the specificity is additionally plotted in the Supplementary materials. For both types of dynamics, the performance increased with the number of samples and decreased with the size of the network.
For shorter time series ( and ), the recall was the most affected performance measure as a function of and , while the precision and the specificity were always close to optimal ( on average). For longer time series ( ), high performance according to all measures was achieved for both the VAR and CLM processes, regardless of the size of the network. The high precision and specificity are due to the effective control of the false positives, in accordance with the strict statistical significance level (the influence of is further discussed in the following sections). The inference algorithm was therefore conservative in the classification of the links.
III.2 Validation of false positive rate
The critical alpha level for statistical significance is a parameter of the algorithm which is designed to control the number of false positives in the network inference. As discussed in the statistical tests section in the Methods, it controls the probability that a target is a false positive, i.e., that at least one of its sources is a false positive. This approach is in line with the perspective that the goal of the network inference is to find the set of relevant sources for each node.
A validation study was carried out to verify that the final number of false positives is consistent with the desired level after multiple statistical tests are performed. The false positive rate was computed after performing the inference on empty networks, where every inferred link is a false positive by definition (i.e., under the complete null hypothesis). The rate was in good accordance with the critical alpha threshold for all network sizes, as shown in Figure 4.
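The calibration logic can be illustrated with a toy check: on independent (null) data, a surrogate-based test at level alpha should reject at a rate close to alpha. The correlation statistic below is an illustrative stand-in for the transfer-entropy statistic used by the algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_fpr(n_trials=500, n_samples=100, n_surrogates=100, alpha=0.05):
    """Fraction of rejections on independent (null) data; a calibrated
    surrogate test should reject at a rate close to alpha."""
    rejections = 0
    for _ in range(n_trials):
        x = rng.normal(size=n_samples)
        y = rng.normal(size=n_samples)  # independent: any detection is false
        stat = abs(np.corrcoef(x, y)[0, 1])
        # Permutation surrogates destroy any dependence between x and y.
        null = np.array([abs(np.corrcoef(rng.permutation(x), y)[0, 1])
                         for _ in range(n_surrogates)])
        p_value = (np.sum(null >= stat) + 1) / (n_surrogates + 1)
        rejections += p_value <= alpha
    return rejections / n_trials

fpr = empirical_fpr()
```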
III.3 Influence of critical level for statistical significance
Given the conservative results obtained for both the VAR and CLM processes (Figure 3), a natural question is to what extent the recall could be improved by increasing the critical alpha level and to what extent the precision would be negatively affected as a side effect.
In order to elucidate this trade-off, the analysis described above (Figure 3) was repeated for increasing values of the critical level, with results shown in Figure 5. For the shortest time series, increasing the critical level resulted in a higher recall and a lower precision, as expected; on the other hand, for the longest time series, the performance measures were not significantly affected. Interestingly, for the intermediate case, increasing the critical level resulted in a higher recall without negatively affecting the precision.
III.4 Inference of coupling lags
So far, the performance evaluation focused on the identification of the correct set of sources for each target node, regardless of the coupling lags. However, since the identification of the correct coupling lags is particularly relevant in neuroscience (see Wibral et al. (2013) and references therein), the performance of the algorithm in identifying the correct coupling lags was additionally investigated.
By construction, a single coupling lag was imposed between each pair of processes (chosen at random between one and five discrete time steps, as described in the Methods). In line with the previous results, samples were sufficient to achieve nearly optimal performance, regardless of the size of the network and for both the VAR and the CLM processes (Figure 6).
III.5 Estimators
Given its speed, the Gaussian estimator is often used for large datasets or as a first exploratory step, even when the stationary distribution cannot be assumed to be Gaussian. The availability of the ground truth made it possible to compare the performance of the Gaussian estimator and the nearest-neighbour estimator on the nonlinear CLM process, which does not satisfy the Gaussian assumption. As expected, the performance of the Gaussian estimator was lower than that of the nearest-neighbour estimator for all network sizes (Figure 7).
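The limitation of the Gaussian estimator can be illustrated with the logistic-map nonlinearity itself. Under the Gaussian assumption, mutual information reduces to a function of the linear correlation, which vanishes for this symmetric relationship; the sketch below is a toy mutual-information example, not the transfer-entropy analysis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_mi(x, y):
    """Mutual information under a bivariate-Gaussian assumption:
    MI = -0.5 * log(1 - rho^2)."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

x = rng.uniform(0.0, 1.0, 20000)
noise = rng.normal(0.0, 0.01, x.size)

# Logistic-map relationship: y is almost a deterministic function of x,
# yet the linear correlation is close to zero because the map is symmetric
# around x = 0.5, so the Gaussian MI estimate is tiny.
mi_nonlinear = gaussian_mi(x, 4.0 * x * (1.0 - x) + noise)

# A linear relationship of comparable strength is detected easily.
mi_linear = gaussian_mi(x, x + noise)
```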
The hierarchical tests introduced in the Methods section allow running the network inference algorithm in parallel on a high-performance computing cluster. Such parallelisation is especially needed when employing the nearest-neighbour estimator. In particular, each target node can be analysed in parallel on a CPU (employing one or more cores) or a GPU, which is made possible by the CPU and GPU estimators provided by the IDTxl package (a summary of the CPU and GPU runtimes is provided in the Supplementary materials).
IV Discussion
The algorithm presented in this paper provides robust statistical tests for network inference to control the false positive rate. These tests are compatible with parallel computation on high-performance computing clusters, which enabled the validation study on synthetic sparse networks of increasing size ( to nodes), using different dynamics (linear autoregressive processes and nonlinear coupled logistic maps) and increasingly longer time series ( to samples). Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and MEG experiments. The results demonstrate that the statistical tests achieve the desired false positive rate and successfully address the multiple-comparison problem inherent in network inference tasks (Figure 4).
The ability to control the false positives while building connectomes is a crucial prerequisite for the application of complex network measures, to the extent that Zalesky et al. (2016) concluded that “specificity is at least twice as important as sensitivity [i.e., recall] when estimating key properties of brain networks, including topological measures of network clustering, network efficiency and network modularity”. The reason is that false positives occur more prevalently between network modules than within them and the spurious intermodular connections have a dramatic impact on network topology (Zalesky et al., 2016).
The trade-off between precision and recall when relaxing the statistical significance threshold was further investigated (Figure 5). When only samples were used, the average recall gain was more than five times smaller than the average precision loss. In our opinion, this result is possibly due to the sparsity of the networks used in this study and suggests a conservative choice of the threshold for sparse networks and short time series. The trade-off was reversed for longer time series: when samples were used, the average recall gain was more than five times larger than the average precision loss. Finally, for samples, high precision and recall were achieved ( on average) for both the vector autoregressive and the coupled logistic maps processes, regardless of the statistical significance threshold.
For both types of dynamics, the network inference performance increased with the length of the time series and decreased with the size of the network (Figure 3). This is to be expected since larger systems require more statistical tests and hence stricter conditions to control the family-wise error rate (false positives). Specifically, larger networks result in wider null distributions of the maximum statistic (i.e., larger variance), whereas longer time series have the opposite effect. Therefore, for large networks and short time series, controlling the false positives can have a negative impact on the ability to identify the true positives, particularly when the effect size (i.e., the transfer entropy value) is small.
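The effect of the network size on the maximum-statistic null distribution can be illustrated with a toy simulation; standard-normal surrogate statistics are an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Null distribution of the maximum statistic: the maximum over m candidate
# statistics (toy standard-normal surrogates). As m grows (larger networks),
# the null distribution of the maximum shifts to the right, so the critical
# threshold at a fixed significance level becomes stricter.
thresholds = {}
for m in (10, 100, 1000):
    max_stats = rng.normal(size=(10000, m)).max(axis=1)
    thresholds[m] = np.quantile(max_stats, 0.95)
```

With a stricter threshold, small effect sizes are harder to detect, which is why the recall suffers first in large networks with short time series.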
In addition, the superior ability of the nearest-neighbour estimator over the Gaussian estimator in detecting nonlinear dependencies was quantified. There is a critical motivation for this comparison: the general applicability of the nearest-neighbour estimators comes at the price of higher computational complexity and a significantly longer runtime, so that the Gaussian estimator is often used for large datasets (or at least as a first exploratory step), even when the Gaussian hypothesis is not justified. To investigate such a scenario, the Gaussian estimator was tested on the nonlinear logistic map processes: while the resulting recall was significantly lower than that of the nearest-neighbour estimator for all network sizes, it was nonetheless able to identify over half of the links for a sufficiently large number of time samples (Figure 7).
The stationarity assumption about the time series corresponds to assuming a single regime of neuronal activity in real brain recordings. If multiple regimes are recorded, which is typical in experimental settings (e.g., sequences of tasks or repeated presentation of stimuli interleaved with resting time windows), different stationary regimes can be studied by performing the analysis within each time window. The networks obtained in different time windows can either be studied separately and compared against each other or collectively interpreted as a single evolving temporal network. To obtain a sufficient amount of observations per window, multiple replications of the experiment under the same conditions are typically carried out. Replications can be assumed to be cyclostationary and estimation techniques exploiting this property have been proposed
(Gómez-Herrero et al., 2015; Wollstadt et al., 2014); these estimators are also available in the IDTxl Python package. The convergence to the (unknown) causal network was only proven under the hypotheses of stationarity, causal sufficiency, and the causal Markov condition (Sun et al., 2015). However, conditional independence holds under milder assumptions (Runge, 2018) and the absence of links is valid under general conditions. The conditional independence relationships can, therefore, be used to exclude variables in subsequent intervention-based causal experiments, making network inference methods valuable for exploratory studies.

In fact, the directed network is only one part of the model and provides the scaffold over which the information-theoretic measures are computed. Therefore, even if the structure of a system is known and there is no need for network inference, information theory can still provide nontrivial insights into the distributed computation by modelling the information storage, transfer, and modification within the system (Lizier, 2013). This decomposition of the predictive information into the active information storage and transfer entropy components is one of many alternatives within the framework proposed by Chicharro and Ledberg (2012). Arguably, this decomposition reflects the segregation-integration dichotomy that has long characterised the interpretation of brain function (Zeki and Shipp, 1988; Sporns, 2010). Information theory has the potential to provide a quantitative definition of these fundamental but still unsettled concepts. In addition, information theory provides a new way of testing fundamental computational theories in neuroscience, e.g., predictive coding (Brodski-Guerniero et al., 2017).
As such, information-theoretic methods should not be seen as opposed to model-based approaches, but as complementary to them (Friston et al., 2013). If certain physically motivated parametric models are assumed, the two approaches are equivalent for network inference: maximising the log-likelihood is asymptotically equivalent to maximising the transfer entropy (Barnett and Bossomaier, 2012; Cliff et al., 2018). Moreover, different approaches can be combined, e.g., the recent large-scale application of spectral DCM was made possible by using functional connectivity models to place prior constraints on the parameter space (Razi et al., 2017). Networks inferred using bivariate transfer entropy have also been employed to reduce the model space prior to DCM analysis (Chan et al., 2017).

In conclusion, the continuous evolution and combination of methods show that network inference from time series is an active field of research, with a current trend towards larger validation studies, improved statistical significance testing, and reduced computational complexity. Information-theoretic approaches require efficient tools to employ nearest-neighbour estimators on large datasets of continuous-valued time series, which are ubiquitous in large-scale brain recordings (calcium imaging, EEG, MEG, fMRI). The algorithm presented in this paper is compatible with parallel computation on high-performance computing clusters, which enabled the study of synthetic nonlinear systems of nodes and samples. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, bringing typical EEG and MEG experiments into scope for future information-theoretic network inference studies.
V Acknowledgements
This research was supported by: Universities Australia/German Academic Exchange Service (DAAD) Australia-Germany Joint Research Cooperation Scheme grant: "Measuring neural information synthesis and its impairment", Grant/Award Number: PPP Australia Projekt-ID 57216857; and Australian Research Council DECRA grant DE160100630. The authors acknowledge the Sydney Informatics Hub and the University of Sydney's high-performance computing cluster Artemis for providing the high-performance computing resources that have contributed to the research results reported within this paper. Furthermore, the authors thank Aaron J. Gutknecht for commenting on a draft of this paper.
VI Author Contributions
Leonardo Novelli: Conceptualization; Data curation; Formal analysis; Investigation; Software; Validation; Visualization; Writing — original draft. Patricia Wollstadt: Conceptualization; Software; Writing — review & editing. Pedro Mediano: Software; Writing — review & editing. Michael Wibral: Conceptualization; Funding acquisition; Methodology; Software; Supervision; Writing — review & editing. Joseph T. Lizier: Conceptualization; Funding acquisition; Methodology; Software; Supervision; Writing — review & editing.
References
 Friston (1994) K. J. Friston, Human Brain Mapping 2, 56 (1994).
 Ay and Polani (2008) N. Ay and D. Polani, Advances in Complex Systems 11, 17 (2008).
 Vicente et al. (2011) R. Vicente, M. Wibral, M. Lindner, and G. Pipa, Journal of Computational Neuroscience 30, 45 (2011).
 Lizier and Rubinov (2012) J. T. Lizier and M. Rubinov, Multivariate construction of effective computational networks from observational data, Tech. Rep. Preprint 25/2012 (Max Planck Institute for Mathematics in the Sciences, 2012).
 Shannon (1948) C. E. Shannon, Bell System Technical Journal 27, 379 (1948).
 Cover and Thomas (2005) T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd ed. (John Wiley & Sons, Inc., Hoboken, NJ, USA, 2005) p. 748.
 Schreiber (2000) T. Schreiber, Physical Review Letters 85, 461 (2000).
 Paluš et al. (1993) M. Paluš, V. Albrecht, and I. Dvořák, Physics Letters A 175, 203 (1993).
 Lizier (2013) J. T. Lizier, The Local Information Dynamics of Distributed Computation in Complex Systems, Springer Theses (Springer Berlin Heidelberg, Berlin, Heidelberg, 2013) p. 311.
 Roulston (1999) M. S. Roulston, Physica D: Nonlinear Phenomena 125, 285 (1999).
 Paninski (2003) L. Paninski, Neural Computation 15, 1191 (2003).
 Dickhaus (2014) T. Dickhaus, Simultaneous Statistical Inference (Springer Berlin Heidelberg, Berlin, Heidelberg, 2014) p. 180.
 Lindner et al. (2011) M. Lindner, R. Vicente, V. Priesemann, and M. Wibral, BMC Neuroscience 12, 119 (2011).
 Bossomaier et al. (2016) T. Bossomaier, L. Barnett, M. Harré, and J. T. Lizier, An Introduction to Transfer Entropy (Springer International Publishing, Cham, 2016) p. 190.
 Vlachos and Kugiumtzis (2010) I. Vlachos and D. Kugiumtzis, Physical Review E 82, 016207 (2010).
 Faes et al. (2011) L. Faes, G. Nollo, and A. Porta, Physical Review E 83, 051112 (2011).
 Sun et al. (2015) J. Sun, D. Taylor, and E. M. Bollt, SIAM Journal on Applied Dynamical Systems 14, 73 (2015).
 Lizier et al. (2008) J. T. Lizier, M. Prokopenko, and A. Y. Zomaya, Physical Review E 77, 026110 (2008).
 Lizier et al. (2010) J. T. Lizier, M. Prokopenko, and A. Y. Zomaya, Chaos 20, 037109 (2010).
 Runge et al. (2012) J. Runge, J. Heitzig, V. Petoukhov, and J. Kurths, Physical Review Letters 108, 258701 (2012).
 Runge et al. (2018) J. Runge, P. Nowack, M. Kretschmer, S. Flaxman, and D. Sejdinovic, arXiv Preprint (2018), arXiv: 1702.07007.
 Montalto et al. (2014) A. Montalto, L. Faes, and D. Marinazzo, PLoS ONE 9, e109462 (2014).
 Kim et al. (2016) P. Kim, J. Rogers, J. Sun, and E. M. Bollt, Journal of Computational and Nonlinear Dynamics 12, 011008 (2016).
 Wollstadt et al. (2019) P. Wollstadt, J. T. Lizier, R. Vicente, C. Finn, M. MartínezZarzuela, P. Mediano, L. Novelli, and M. Wibral, Journal of Open Source Software 4, 1081 (2019).
 Takens (1981) F. Takens, in Dynamical Systems and Turbulence, edited by D. Rand and L. Young (Springer Berlin Heidelberg, 1981) pp. 366–381.
 Lizier et al. (2012) J. T. Lizier, M. Prokopenko, and A. Y. Zomaya, Information Sciences 208, 39 (2012).
 Wibral et al. (2013) M. Wibral, N. Pampu, V. Priesemann, F. Siebenhühner, H. Seiwert, M. Lindner, J. T. Lizier, and R. Vicente, PLoS ONE 8, e55809 (2013).
 Garland et al. (2016) J. Garland, R. G. James, and E. Bradley, Physical Review E 93, 022221 (2016).
 Verdes (2005) P. F. Verdes, Physical Review E 72, 026222 (2005).
 Vakorin et al. (2009) V. A. Vakorin, O. A. Krakovska, and A. R. McIntosh, Journal of Neuroscience Methods 184, 152 (2009).
 Kraskov et al. (2004) A. Kraskov, H. Stögbauer, and P. Grassberger, Physical Review E 69, 066138 (2004).
 Chávez et al. (2003) M. Chávez, J. Martinerie, and M. Le Van Quyen, Journal of Neuroscience Methods 124, 113 (2003).
 Lizier et al. (2011) J. T. Lizier, J. Heinzle, A. Horstmann, J.-D. Haynes, and M. Prokopenko, Journal of Computational Neuroscience 30, 85 (2011).
 Lizier (2014) J. T. Lizier, Frontiers in Robotics and AI 1, 11 (2014).
 Schreiber and Schmitz (2000) T. Schreiber and A. Schmitz, Physica D: Nonlinear Phenomena 142, 346 (2000).
 Barnett and Bossomaier (2012) L. Barnett and T. Bossomaier, Physical Review Letters 109, 138105 (2012).
 Nichols and Hayasaka (2003) T. Nichols and S. Hayasaka, Statistical Methods in Medical Research 12, 419 (2003).
 Šidák (1967) Z. Šidák, Journal of the American Statistical Association 62, 626 (1967).
 Benjamini and Hochberg (1995) Y. Benjamini and Y. Hochberg, Journal of the Royal Statistical Society. Series B (Methodological) 57, 289 (1995).
 Spirtes et al. (1993) P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search, Lecture Notes in Statistics, Vol. 81 (Springer New York, 1993).
 Zalesky et al. (2014) A. Zalesky, A. Fornito, L. Cocchi, L. L. Gollo, and M. Breakspear, Proceedings of the National Academy of Sciences 111, 10341 (2014).
 Rubinov et al. (2009) M. Rubinov, O. Sporns, C. van Leeuwen, and M. Breakspear, BMC Neuroscience 10, 55 (2009).
 Valdes-Sosa et al. (2011) P. A. Valdes-Sosa, A. Roebroeck, J. Daunizeau, and K. J. Friston, NeuroImage 58, 339 (2011).
 Sims (1980) C. A. Sims, Econometrica 48, 1 (1980).
 Lorenz (1993) H.-W. Lorenz, in Nonlinear Dynamical Economics and Chaotic Motion (Springer Berlin Heidelberg, 1993) pp. 119–166.
 Strogatz (2015) S. H. Strogatz, Nonlinear Dynamics and Chaos (CRC Press, Boca Raton, 2015) p. 531.
 Erdős and Rényi (1959) P. Erdős and A. Rényi, Publicationes Mathematicae Debrecen 6, 290 (1959).
 Atay and Karabacak (2006) F. M. Atay and Ö. Karabacak, SIAM Journal on Applied Dynamical Systems 5, 508 (2006).
 Granger (1969) C. W. J. Granger, Econometrica 37, 424 (1969).
 Barnett et al. (2009) L. Barnett, A. B. Barrett, and A. K. Seth, Physical Review Letters 103, 238701 (2009).
 Frenzel and Pompe (2007) S. Frenzel and B. Pompe, Physical Review Letters 99, 204101 (2007).
 Vejmelka and Paluš (2008) M. Vejmelka and M. Paluš, Physical Review E 77, 026214 (2008).
 Gómez-Herrero et al. (2015) G. Gómez-Herrero, W. Wu, K. Rutanen, M. Soriano, G. Pipa, and R. Vicente, Entropy 17, 1958 (2015).
 Zalesky et al. (2016) A. Zalesky, A. Fornito, L. Cocchi, L. L. Gollo, M. P. van den Heuvel, and M. Breakspear, NeuroImage 142, 407 (2016).
 Wollstadt et al. (2014) P. Wollstadt, M. MartínezZarzuela, R. Vicente, F. J. DíazPernas, and M. Wibral, PLoS ONE 9, e102833 (2014).
 Runge (2018) J. Runge, Chaos 28, 075310 (2018).
 Chicharro and Ledberg (2012) D. Chicharro and A. Ledberg, Physical Review E 86, 041901 (2012).
 Zeki and Shipp (1988) S. Zeki and S. Shipp, Nature 335, 311 (1988).
 Sporns (2010) O. Sporns, Networks of the Brain (MIT Press, 2010) p. 424.
 Brodski-Guerniero et al. (2017) A. Brodski-Guerniero, G.-F. Paasch, P. Wollstadt, I. Özdemir, J. T. Lizier, and M. Wibral, The Journal of Neuroscience 37, 8273 (2017).
 Friston et al. (2013) K. J. Friston, R. Moran, and A. K. Seth, Current Opinion in Neurobiology 23, 172 (2013).
 Cliff et al. (2018) O. Cliff, M. Prokopenko, and R. Fitch, Entropy 20, 51 (2018).
 Razi et al. (2017) A. Razi, M. L. Seghier, Y. Zhou, P. McColgan, P. Zeidman, H.J. Park, O. Sporns, G. Rees, and K. J. Friston, Network Neuroscience 1, 222 (2017).
 Chan et al. (2017) J. S. Chan, M. Wibral, P. Wollstadt, C. Stawowsky, M. Brandl, S. Helbling, M. Naumer, and J. Kaiser, bioRxiv Preprint , 178095 (2017).
VII Supplementary materials
VII.1 Relationship with Bonferroni correction
Performing a Taylor expansion of (2), i.e., of the single-variable level implied by the maximum statistic test, around α_FWER = 0 yields:

(8)  α_max = 1 − (1 − α_FWER)^(1/m) = α_FWER/m + ((m − 1)/(2m²)) α_FWER² + ((m − 1)(2m − 1)/(6m³)) α_FWER³ + …

Truncating the Taylor series at the first order yields the approximation

(9)  α_max ≈ α_FWER/m,

which coincides with the false positive rate resulting from the Bonferroni correction (Dickhaus, 2014).
Moreover, since the summands in (8) are positive for every 0 < α_FWER < 1, the Taylor series is lower-bounded by any truncated series. In particular, the false positive rate resulting from the Bonferroni correction is a lower bound for α_max (the false positive rate for a single variable resulting from the maximum statistic test), i.e., the maximum statistic correction is less stringent than the Bonferroni correction.
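Assuming the Šidák-type relationship between the family-wise level and the single-variable level, the comparison with the Bonferroni correction can be verified numerically:

```python
def max_statistic_alpha(alpha_fwer, m):
    """Single-variable level implied by the maximum-statistic test,
    assuming independence across the m variables (Sidak-type relation)."""
    return 1.0 - (1.0 - alpha_fwer) ** (1.0 / m)

def bonferroni_alpha(alpha_fwer, m):
    """Single-variable level under the Bonferroni correction."""
    return alpha_fwer / m

# The Bonferroni level is always the lower (stricter) of the two.
levels = [(m, bonferroni_alpha(0.05, m), max_statistic_alpha(0.05, m))
          for m in (2, 10, 100, 1000)]
```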
VII.2 Other plots
Figure 8 replots the precision and recall from Figure 3 using different subplots for each sample size, and additionally shows the specificity. Similarly, Figure 9 replots the precision and recall from Figure 5 using different subplots for each sample size.
VII.3 Runtime on CPU and GPU
The algorithm was either run using a single core per target (on different Intel Xeon CPUs with similar characteristics:  GHz), or using a whole dedicated GPU per target (NVIDIA V100 SXM2, GB RAM). These computations were performed on the Artemis computing cluster made available by the Sydney Informatics Hub at The University of Sydney. The maximum CPU and GPU runtimes for a single target are shown in Table 1, which summarises the results for different time series lengths and different network sizes.
Sample size | Network size | Max GPU time per node (h) | Max CPU time per node (h)