Search Results

Documents authored by Lafond, Manuel


Document
Finding Maximum Common Contractions Between Phylogenetic Networks

Authors: Bertrand Marchand, Nadia Tahiri, Olivier Tremblay-Savard, and Manuel Lafond

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
In this paper, we lay the groundwork for the comparison of phylogenetic networks based on edge contractions and expansions as edit operations, as originally proposed by Robinson and Foulds to compare trees. We prove that these operations connect the space of all phylogenetic networks on the same set of leaves, even if we forbid contractions that create cycles. This allows us to define an operational distance on this space, as the minimum number of contractions and expansions required to transform one network into another. We highlight the difference between this distance and the computation of the maximum common contraction between two networks. Given its ability to outline a common structure between them, which can provide valuable biological insights, we study the algorithmic aspects of the latter. We first prove that computing a maximum common contraction between two networks is NP-hard, even when the maximum degree, the size of the common contraction, or the number of leaves is bounded. We also provide lower bounds for the problem based on the Exponential-Time Hypothesis. Nonetheless, we do provide a polynomial-time algorithm for weakly galled trees, a generalization of galled trees.
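
A minimal sketch of the edge-contraction operation the distance above is built on (not the paper's algorithm): the encoding of the network as a dict of successor sets and all vertex labels are assumptions made for this example.

# Toy edge contraction on a small directed network (Python).
def contract_edge(succ, u, v):
    """Contract the edge (u, v): merge v into u and drop the edge."""
    merged = {w: set(nbrs) for w, nbrs in succ.items() if w != v}
    merged[u] = (succ[u] | succ[v]) - {u, v}      # v's out-edges move to u
    for w, nbrs in merged.items():
        if v in nbrs:                             # redirect edges that pointed to v
            nbrs.discard(v)
            if w != u:
                nbrs.add(u)
    return merged

# A tiny network: root r, internal nodes a and b, leaves x, y, z.
network = {"r": {"a", "b"}, "a": {"x", "y"}, "b": {"y", "z"},
           "x": set(), "y": set(), "z": set()}
print(contract_edge(network, "r", "a"))
# r now has successors {'b', 'x', 'y'}: a's children are inherited by r.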

Cite as

Bertrand Marchand, Nadia Tahiri, Olivier Tremblay-Savard, and Manuel Lafond. Finding Maximum Common Contractions Between Phylogenetic Networks. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 16:1-16:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{marchand_et_al:LIPIcs.WABI.2024.16,
  author =	{Marchand, Bertrand and Tahiri, Nadia and Tremblay-Savard, Olivier and Lafond, Manuel},
  title =	{{Finding Maximum Common Contractions Between Phylogenetic Networks}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{16:1--16:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.16},
  URN =		{urn:nbn:de:0030-drops-206606},
  doi =		{10.4230/LIPIcs.WABI.2024.16},
  annote =	{Keywords: Phylogenetic networks, contractions, algorithms, weakly galled trees}
}
Document
The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees

Authors: Alitzel López Sánchez, José Antonio Ramírez-Rafael, Alejandro Flores-Lamas, Maribel Hernández-Rosales, and Manuel Lafond

Published in: LIPIcs, Volume 312, 24th International Workshop on Algorithms in Bioinformatics (WABI 2024)


Abstract
In this study, we investigate the problem of comparing gene trees reconciled with the same species tree using a novel semi-metric, called the Path-Label Reconciliation (PLR) dissimilarity measure. This approach not only quantifies differences in the topology of reconciled gene trees, but also considers discrepancies in predicted ancestral gene-species maps and speciation/duplication events, offering a refinement of existing metrics such as Robinson-Foulds (RF) and their labeled extensions LRF and ELRF. A tunable parameter α also allows users to adjust the balance between its species map and event labeling components. We show that PLR can be computed in linear time and that it is a semi-metric. We also discuss the diameters of reconciled gene tree measures, which are important in practice for normalization, and provide initial bounds on PLR, LRF, and ELRF. To validate PLR, we simulate reconciliations and perform comparisons with LRF and ELRF. The results show that PLR provides a more evenly distributed range of distances, making it less susceptible to overestimating differences in the presence of small topological changes, while at the same time being computationally efficient. Our findings suggest that the theoretical diameter is rarely reached in practice. The PLR measure advances phylogenetic reconciliation by combining theoretical rigor with practical applicability. Future research will refine its mathematical properties, explore its performance on different tree types, and integrate it with existing bioinformatics tools for large-scale evolutionary analyses. The open source code is available at: https://pypi.org/project/parle/.
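
The precise definition of PLR is given in the paper; the sketch below is only a hypothetical illustration of how a parameter α can trade off a species-map component against an event-labeling component. The dictionaries, the per-node comparison, and the convex combination are all assumptions, not the paper's formula.

# Hypothetical alpha-weighted blend of two discrepancy terms (Python).
def toy_dissimilarity(map1, map2, events1, events2, alpha=0.5):
    """map*: gene node -> ancestral species; events*: gene node -> 'S' or 'D'."""
    n = len(map1)
    map_diff = sum(map1[g] != map2[g] for g in map1) / n
    event_diff = sum(events1[g] != events2[g] for g in events1) / n
    return alpha * map_diff + (1 - alpha) * event_diff

m1 = {"g1": "A", "g2": "B", "g12": "AB"}
m2 = {"g1": "A", "g2": "B", "g12": "B"}
e1 = {"g1": "S", "g2": "S", "g12": "S"}
e2 = {"g1": "S", "g2": "S", "g12": "D"}
print(toy_dissimilarity(m1, m2, e1, e2, alpha=0.7))   # 0.333... (one node differs in each component)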

Cite as

Alitzel López Sánchez, José Antonio Ramírez-Rafael, Alejandro Flores-Lamas, Maribel Hernández-Rosales, and Manuel Lafond. The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees. In 24th International Workshop on Algorithms in Bioinformatics (WABI 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 312, pp. 20:1-20:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{lopezsanchez_et_al:LIPIcs.WABI.2024.20,
  author =	{L\'{o}pez S\'{a}nchez, Alitzel and Ram{\'\i}rez-Rafael, Jos\'{e} Antonio and Flores-Lamas, Alejandro and Hern\'{a}ndez-Rosales, Maribel and Lafond, Manuel},
  title =	{{The Path-Label Reconciliation (PLR) Dissimilarity Measure for Gene Trees}},
  booktitle =	{24th International Workshop on Algorithms in Bioinformatics (WABI 2024)},
  pages =	{20:1--20:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-340-9},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{312},
  editor =	{Pissis, Solon P. and Sung, Wing-Kin},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2024.20},
  URN =		{urn:nbn:de:0030-drops-206645},
  doi =		{10.4230/LIPIcs.WABI.2024.20},
  annote =	{Keywords: Reconciliation, gene trees, species trees, evolutionary scenarios}
}
Document
On the Complexity of Temporal Arborescence Reconfiguration

Authors: Riccardo Dondi and Manuel Lafond

Published in: LIPIcs, Volume 292, 3rd Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2024)


Abstract
We analyze the complexity of Arborescence Reconfiguration on temporal digraphs (Temporal Arborescence Reconfiguration). Given two temporal arborescences in a temporal digraph, the problem asks for the minimum number of arc flips, i.e., arc exchanges, that yield a sequence of temporal arborescences transforming one into the other. We study the complexity of the problem, also taking into account its approximation and parameterized complexity, even in restricted cases. First, we solve an open problem by showing that Temporal Arborescence Reconfiguration is NP-hard for two timestamps. Then we show that even if the two temporal arborescences differ only by two arcs, the problem is not approximable within factor b·ln|V(D)| for any constant 0 < b < 1, where V(D) is the set of vertices of the temporal arborescences. Finally, we prove that Temporal Arborescence Reconfiguration is W[1]-hard when parameterized by the number of arc flips needed to transform one temporal arborescence into the other.
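
As a small, hypothetical illustration of the objects involved (not the reconfiguration algorithm itself): if a temporal arborescence is stored as a set of timed arcs (u, v, t), and each flip exchanges a single arc, then the number of arcs on which two arborescences disagree is a trivial lower bound on the flip distance. The real problem additionally requires every intermediate structure to be a temporal arborescence, which this sketch ignores.

# Trivial lower bound on the number of arc flips (Python).
def flip_lower_bound(arb1, arb2):
    return len(arb1 - arb2)      # arcs of arb1 that must eventually be flipped out

A = {("r", "a", 1), ("a", "b", 2), ("a", "c", 3)}
B = {("r", "a", 1), ("r", "b", 2), ("b", "c", 3)}
print(flip_lower_bound(A, B))    # 2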

Cite as

Riccardo Dondi and Manuel Lafond. On the Complexity of Temporal Arborescence Reconfiguration. In 3rd Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 292, pp. 10:1-10:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{dondi_et_al:LIPIcs.SAND.2024.10,
  author =	{Dondi, Riccardo and Lafond, Manuel},
  title =	{{On the Complexity of Temporal Arborescence Reconfiguration}},
  booktitle =	{3rd Symposium on Algorithmic Foundations of Dynamic Networks (SAND 2024)},
  pages =	{10:1--10:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-315-7},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{292},
  editor =	{Casteigts, Arnaud and Kuhn, Fabian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAND.2024.10},
  URN =		{urn:nbn:de:0030-drops-198888},
  doi =		{10.4230/LIPIcs.SAND.2024.10},
  annote =	{Keywords: Arborescence, Temporal Graphs, Graph Algorithms, Parameterized Complexity, Approximation Complexity}
}
Document
An FPT Algorithm for Temporal Graph Untangling

Authors: Riccardo Dondi and Manuel Lafond

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
Several classical combinatorial problems have been considered and analysed on temporal graphs. Recently, a variant of Vertex Cover on temporal graphs, called MinTimelineCover, has been introduced to summarize timeline activities in social networks. The problem asks to cover every temporal edge while minimizing the total span of the vertices (where the span of a vertex is the length of the timestamp interval it must remain active in). While the problem has been shown to be NP-hard even in very restricted cases, its parameterized complexity has not been fully understood. The problem is known to be in FPT under the span parameter only for graphs with two timestamps, but the parameterized complexity for the general case is open. We settle this open problem by giving an FPT algorithm that is based on a combination of iterative compression and a reduction to the Digraph Pair Cut problem, a powerful problem that has received significant attention recently.
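
As a minimal sketch of the objective described above (not the FPT algorithm): given the temporal edges and a candidate activity interval per vertex, the code below checks that every temporal edge has an active endpoint at its timestamp and reports the total span. The interval encoding and the convention that the span of [a, b] is b - a are assumptions.

# Solution checker for the timeline-cover objective (Python).
def check_cover(temporal_edges, intervals):
    """temporal_edges: (u, v, t) triples; intervals: vertex -> (a, b), or None if never active."""
    def active(x, t):
        iv = intervals.get(x)
        return iv is not None and iv[0] <= t <= iv[1]

    covered = all(active(u, t) or active(v, t) for u, v, t in temporal_edges)
    span = sum(iv[1] - iv[0] for iv in intervals.values() if iv is not None)
    return covered, span

edges = [("u", "v", 1), ("v", "w", 2), ("u", "w", 3)]
solution = {"u": None, "v": (1, 2), "w": (3, 3)}
print(check_cover(edges, solution))    # (True, 1): every edge covered, total span 1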

Cite as

Riccardo Dondi and Manuel Lafond. An FPT Algorithm for Temporal Graph Untangling. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 12:1-12:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{dondi_et_al:LIPIcs.IPEC.2023.12,
  author =	{Dondi, Riccardo and Lafond, Manuel},
  title =	{{An FPT Algorithm for Temporal Graph Untangling}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{12:1--12:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.12},
  URN =		{urn:nbn:de:0030-drops-194311},
  doi =		{10.4230/LIPIcs.IPEC.2023.12},
  annote =	{Keywords: Temporal Graphs, Vertex Cover, Graph Algorithms, Parameterized Complexity}
}
Document
Parameterized Complexity of Domination Problems Using Restricted Modular Partitions

Authors: Manuel Lafond and Weidong Luo

Published in: LIPIcs, Volume 272, 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)


Abstract
For a graph class 𝒢, we define the 𝒢-modular cardinality of a graph G as the minimum size of a vertex partition of G into modules, each of which induces a graph in 𝒢. This generalizes other module-based graph parameters such as neighborhood diversity and iterated type partition. Moreover, if 𝒢 has bounded modular-width, the W[1]-hardness of a problem in 𝒢-modular cardinality implies hardness on modular-width, clique-width, and other related parameters. Several FPT algorithms based on modular partitions compute a solution table in each module, then combine these tables into a global solution. This works well when each table has a succinct representation, but, as we argue, when no such representation exists the problem is typically W[1]-hard. We illustrate these ideas on the generic (α, β)-domination problem, which is a generalization of known domination problems such as Bounded Degree Deletion, k-Domination, and α-Domination. We show that for graph classes 𝒢 that require arbitrarily large solution tables, these problems are W[1]-hard in the 𝒢-modular cardinality, whereas they are fixed-parameter tractable when they admit succinct solution tables. This leads to several new positive and negative results for many domination problems parameterized by known and novel structural graph parameters such as clique-width, modular-width, and cluster-modular cardinality.
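
A minimal sketch of the parameter's definition (not an algorithm from the paper), taking 𝒢 to be the class of cluster graphs as in the cluster-modular cardinality mentioned above; the adjacency encoding and the example graph are assumptions.

# Check that a vertex partition witnesses small cluster-modular cardinality (Python).
def is_module(adj, part, vertices):
    # every vertex outside the part sees either all of it or none of it
    return all(part <= adj[x] or not (part & adj[x]) for x in vertices - part)

def induces_cluster_graph(adj, part):
    # the part must induce a disjoint union of cliques (no induced P3)
    return all(adj[u] & part <= adj[w] | {w}
               for u in part for w in part if w in adj[u])

def cluster_modular_ok(adj, parts):
    vertices = set(adj)
    return all(is_module(adj, p, vertices) and induces_cluster_graph(adj, p) for p in parts)

# A triangle {1,2,3} completely joined to an independent set {4,5,6}:
# both parts are modules inducing cluster graphs, so the parameter is at most 2.
adj = {1: {2, 3, 4, 5, 6}, 2: {1, 3, 4, 5, 6}, 3: {1, 2, 4, 5, 6},
       4: {1, 2, 3}, 5: {1, 2, 3}, 6: {1, 2, 3}}
print(cluster_modular_ok(adj, [{1, 2, 3}, {4, 5, 6}]))   # True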

Cite as

Manuel Lafond and Weidong Luo. Parameterized Complexity of Domination Problems Using Restricted Modular Partitions. In 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 272, pp. 61:1-61:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{lafond_et_al:LIPIcs.MFCS.2023.61,
  author =	{Lafond, Manuel and Luo, Weidong},
  title =	{{Parameterized Complexity of Domination Problems Using Restricted Modular Partitions}},
  booktitle =	{48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)},
  pages =	{61:1--61:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-292-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{272},
  editor =	{Leroux, J\'{e}r\^{o}me and Lombardy, Sylvain and Peleg, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2023.61},
  URN =		{urn:nbn:de:0030-drops-185958},
  doi =		{10.4230/LIPIcs.MFCS.2023.61},
  annote =	{Keywords: modular-width, parameterized algorithms, W-hardness, 𝒢-modular cardinality}
}
Document
Predicting Horizontal Gene Transfers with Perfect Transfer Networks

Authors: Alitzel López Sánchez and Manuel Lafond

Published in: LIPIcs, Volume 242, 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)


Abstract
Horizontal gene transfer inference approaches are usually based on gene sequences: parametric methods search for patterns that deviate from a particular genomic signature, while phylogenetic methods use sequences to reconstruct the gene and species trees. However, it is well known that sequences have difficulty identifying ancient transfers, since mutations have had enough time to erase all evidence of such events. In this work, we ask whether character-based methods can predict gene transfers. Their advantage over sequences is that homologous genes can have low DNA similarity but still retain enough important common motifs to share character traits, for instance the same functional or expression profile. A phylogeny in which two separate clades acquired the same character independently might indicate the presence of a transfer even in the absence of sequence similarity. We introduce perfect transfer networks, which are phylogenetic networks that can explain the character diversity of a set of taxa. This problem has been studied extensively in the form of ancestral recombination networks, but these only model hybridization events and do not differentiate between direct parents and lateral donors. We focus on tree-based networks, in which edges representing vertical descent are clearly distinguished from those that represent horizontal transmission. Our model is a direct generalization of perfect phylogeny models to such networks. Our goal is to initiate a study of the structural and algorithmic properties of perfect transfer networks. We then show that one can decide in polynomial time whether a given network is a valid explanation for a set of taxa, and show how, for a given tree, one can add transfer edges to it so that it explains a set of taxa.
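
For grounding, the sketch below implements the classical tree-level check that this model generalizes: assuming a character is gained exactly once and never lost, the taxa carrying it must form a clade of the rooted tree. The child-list encoding and the labels are assumptions; deciding the analogous question on a transfer network is what the paper addresses.

# Clade check for a binary character on a rooted tree (Python).
def leaves_below(children, node):
    kids = children.get(node, [])
    return {node} if not kids else set().union(*(leaves_below(children, c) for c in kids))

def explainable_on_tree(children, root, taxa_with_char):
    """True iff some node's clade is exactly the set of taxa carrying the character."""
    nodes = [root] + [c for kids in children.values() for c in kids]
    return any(leaves_below(children, n) == set(taxa_with_char) for n in nodes)

children = {"root": ["x", "A"], "x": ["B", "C"]}          # tree (A,(B,C))
print(explainable_on_tree(children, "root", {"B", "C"}))  # True: the clade under x
print(explainable_on_tree(children, "root", {"A", "C"}))  # False: would need a transfer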

Cite as

Alitzel López Sánchez and Manuel Lafond. Predicting Horizontal Gene Transfers with Perfect Transfer Networks. In 22nd International Workshop on Algorithms in Bioinformatics (WABI 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 242, pp. 3:1-3:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lopezsanchez_et_al:LIPIcs.WABI.2022.3,
  author =	{L\'{o}pez S\'{a}nchez, Alitzel and Lafond, Manuel},
  title =	{{Predicting Horizontal Gene Transfers with Perfect Transfer Networks}},
  booktitle =	{22nd International Workshop on Algorithms in Bioinformatics (WABI 2022)},
  pages =	{3:1--3:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-243-3},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{242},
  editor =	{Boucher, Christina and Rahmann, Sven},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2022.3},
  URN =		{urn:nbn:de:0030-drops-170376},
  doi =		{10.4230/LIPIcs.WABI.2022.3},
  annote =	{Keywords: Horizontal gene transfer, tree-based networks, perfect phylogenies, character-based, gene-expression, indirect phylogenetic methods}
}
Document
How Brokers Can Optimally Abuse Traders

Authors: Manuel Lafond

Published in: LIPIcs, Volume 226, 11th International Conference on Fun with Algorithms (FUN 2022)


Abstract
Traders buy and sell financial instruments in hopes of making a profit, and brokers are responsible for the transactions. There are several hypotheses and conspiracy theories arguing that in some situations, brokers want their traders to lose money. For instance, a broker may want to protect the positions of a privileged customer. Another example is that some brokers take positions opposite to their traders', in which case they make money whenever their traders lose money. These are reasons for which brokers might manipulate prices in order to maximize the losses of their traders. In this paper, our goal is to perform this shady task optimally - or at least to check whether this can actually be done algorithmically. Assuming total control over the price of an asset (ignoring the usual aspects of finance such as market conditions, external influence or stochasticity), we show how, in quadratic time, given a set of trades specified by a stop-loss and a take-profit price, a broker can find a price movement that maximizes the traders' losses. We also look at an online trade model in which the broker and the trader take turns, each trying to make a profit. We show under which conditions either side can make a profit, and that the best option for the trader is to never trade.
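
Purely as a hypothetical illustration of the setting (the quadratic-time algorithm is not reproduced here), the sketch below evaluates the traders' total loss on a given price path, under made-up conventions: unit-size long trades, all open at the initial price, each closing at its stop-loss or take-profit, whichever the path reaches first.

# Evaluate trader losses on a broker-chosen price path (Python).
def total_trader_loss(prices, trades):
    """prices: the price path, prices[0] is the entry price.
    trades: list of (stop_loss, take_profit) with stop_loss < prices[0] < take_profit."""
    loss = 0.0
    for stop, take in trades:
        exit_price = prices[-1]                # never triggered: close at the last price
        for p in prices:
            if p <= stop or p >= take:
                exit_price = stop if p <= stop else take
                break
        loss += prices[0] - exit_price         # positive means the trader loses
    return loss

trades = [(95, 110), (90, 103)]
print(total_trader_loss([100, 104, 94, 100], trades))
# 2.0: the first trade is stopped out at 95, the second takes profit at 103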

Cite as

Manuel Lafond. How Brokers Can Optimally Abuse Traders. In 11th International Conference on Fun with Algorithms (FUN 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 226, pp. 18:1-18:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{lafond:LIPIcs.FUN.2022.18,
  author =	{Lafond, Manuel},
  title =	{{How Brokers Can Optimally Abuse Traders}},
  booktitle =	{11th International Conference on Fun with Algorithms (FUN 2022)},
  pages =	{18:1--18:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-232-7},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{226},
  editor =	{Fraigniaud, Pierre and Uno, Yushi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2022.18},
  URN =		{urn:nbn:de:0030-drops-159882},
  doi =		{10.4230/LIPIcs.FUN.2022.18},
  annote =	{Keywords: Algorithms, trading, graph theory}
}
Document
Genomic Problems Involving Copy Number Profiles: Complexity and Algorithms

Authors: Manuel Lafond, Binhai Zhu, and Peng Zou

Published in: LIPIcs, Volume 161, 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)


Abstract
Recently, due to genomic sequence analysis in several types of cancer, genomic data based on copy number profiles (CNPs for short) are becoming more and more popular. A CNP is a vector in which each component is a non-negative integer representing the number of copies of a specific segment of interest. The motivation is that in the late stages of certain types of cancer, genomes progress rapidly through segmental duplications and deletions, and hence obtaining the exact sequences becomes difficult. Instead, the number of copies of important segments can be predicted from expression analysis and carries important biological information. Therefore, significant research has recently been devoted to the analysis of genomic data represented as CNPs. In this paper, we present two streams of results. The first consists of negative results on two open problems regarding the computational complexity of the Minimum Copy Number Generation (MCNG) problem posed by Qingge et al. in 2018. MCNG is defined as follows: given a string S in which each character represents a gene or segment, and a CNP C, compute a string T from S, with the minimum number of segmental duplications and deletions, such that cnp(T)=C. It was shown by Qingge et al. that the problem is NP-hard if the duplications are tandem, and they left open the question of whether the problem remains NP-hard if arbitrary duplications and/or deletions are used. We answer this question affirmatively in this paper; in fact, we prove that it is NP-hard to even obtain a constant-factor approximation. This is achieved through a general-purpose lemma on set-cover reductions that require an exact cover in one direction, but not the other, which might be of independent interest. We also prove that the corresponding parameterized version is W[1]-hard, answering another open question by Qingge et al. The other result is positive and is based on a new (and more general) problem regarding CNPs. The Copy Number Profile Conforming (CNPC) problem is formally defined as follows: given two CNPs C₁ and C₂, compute two strings S₁ and S₂ with cnp(S₁)=C₁ and cnp(S₂)=C₂ such that the distance between S₁ and S₂, d(S₁,S₂), is minimized. Here, d(S₁,S₂) is a very general term: it could be any genome rearrangement distance (reversal, transposition, tandem duplication, etc.). We make a first step by showing that if d(S₁,S₂) is measured by the breakpoint distance, then the problem is polynomially solvable. We expect that this will trigger related research along this line in the near future.
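
As a small grounding example for the definitions above (string encoding and indices are illustrative), the sketch below computes the CNP of a string and applies one segmental duplication or deletion.

# Copy number profiles and segmental operations (Python).
from collections import Counter

def cnp(s):
    return Counter(s)                          # copies of each gene/segment symbol

def segmental_duplication(s, i, j, at):
    return s[:at] + s[i:j] + s[at:]            # insert a copy of s[i:j] at position 'at'

def segmental_deletion(s, i, j):
    return s[:i] + s[j:]                       # delete the segment s[i:j]

s = "abcab"
print(cnp(s))                                  # Counter({'a': 2, 'b': 2, 'c': 1})
t = segmental_duplication(s, 0, 2, 5)          # append a copy of "ab"
print(t, cnp(t) == cnp("abcabab"))             # abcabab True
print(segmental_deletion(t, 2, 3))             # ababab (the single 'c' is deleted)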

Cite as

Manuel Lafond, Binhai Zhu, and Peng Zou. Genomic Problems Involving Copy Number Profiles: Complexity and Algorithms. In 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 161, pp. 22:1-22:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{lafond_et_al:LIPIcs.CPM.2020.22,
  author =	{Lafond, Manuel and Zhu, Binhai and Zou, Peng},
  title =	{{Genomic Problems Involving Copy Number Profiles: Complexity and Algorithms}},
  booktitle =	{31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020)},
  pages =	{22:1--22:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-149-8},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{161},
  editor =	{G{\o}rtz, Inge Li and Weimann, Oren},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2020.22},
  URN =		{urn:nbn:de:0030-drops-121471},
  doi =		{10.4230/LIPIcs.CPM.2020.22},
  annote =	{Keywords: Computational genomics, cancer genomics, copy number profiles, NP-hardness, approximation algorithms, FPT algorithms}
}
Document
The Tandem Duplication Distance Is NP-Hard

Authors: Manuel Lafond, Binhai Zhu, and Peng Zou

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
In computational biology, tandem duplication is an important biological phenomenon which can occur either at the genome or at the DNA level. A tandem duplication takes a copy of a genome segment and inserts it right after the segment - this can be represented as the string operation AXB ⇒ AXXB. Tandem exon duplications have been found in many species such as human, fly or worm, and have been studied extensively in computational biology. The Tandem Duplication (TD) distance problem we investigate in this paper is defined as follows: given two strings S and T over the same alphabet, compute the smallest sequence of tandem duplications required to convert S to T. The natural question of whether the TD distance can be computed in polynomial time was posed in 2004 by Leupold et al. and had remained open, despite the fact that tandem duplications have received much attention ever since. In this paper, we prove that this problem is NP-hard, settling this 16-year-old open problem. We further show that this hardness holds even if all characters of S are distinct. This is known as the exemplar TD distance, which is of special relevance in bioinformatics. One of the tools we develop for the reduction is a new problem called Cost-Effective Subgraph, for which we obtain W[1]-hardness results that might be of independent interest. We finally show that computing the exemplar TD distance between S and T is fixed-parameter tractable. Our results open the door to many other questions, and we conclude with several open problems.
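
A minimal sketch of the operation AXB ⇒ AXXB and of a brute-force TD distance for tiny instances; the search is exponential and purely illustrative, consistent with the hardness result above.

# Tandem duplications and a brute-force TD distance (Python).
from collections import deque

def tandem_duplications(s):
    """All strings obtainable from s by one tandem duplication (AXB => AXXB)."""
    return {s[:j] + s[i:j] + s[j:] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}

def td_distance(source, target):
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        s, d = queue.popleft()
        if s == target:
            return d
        if len(s) >= len(target):
            continue                           # duplications only make the string longer
        for t in tandem_duplications(s):
            if len(t) <= len(target) and t not in seen:
                seen.add(t)
                queue.append((t, d + 1))
    return None                                # target is not reachable from source

print(td_distance("ab", "abab"))    # 1 (duplicate "ab")
print(td_distance("ab", "aabab"))   # 2 (e.g. ab -> aab -> aabab)
print(td_distance("ab", "ba"))      # None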

Cite as

Manuel Lafond, Binhai Zhu, and Peng Zou. The Tandem Duplication Distance Is NP-Hard. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 15:1-15:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{lafond_et_al:LIPIcs.STACS.2020.15,
  author =	{Lafond, Manuel and Zhu, Binhai and Zou, Peng},
  title =	{{The Tandem Duplication Distance Is NP-Hard}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{15:1--15:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.15},
  URN =		{urn:nbn:de:0030-drops-118769},
  doi =		{10.4230/LIPIcs.STACS.2020.15},
  annote =	{Keywords: Tandem duplication, Text processing, Formal languages, Computational genomics, FPT algorithms}
}
Document
Track C: Foundations of Networks and Multi-Agent Systems: Models, Algorithms and Information Management
Energy Consumption of Group Search on a Line

Authors: Jurek Czyzowicz, Konstantinos Georgiou, Ryan Killick, Evangelos Kranakis, Danny Krizanc, Manuel Lafond, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
Consider two robots that start at the origin of the infinite line in search of an exit at an unknown location on the line. The robots can collaborate in the search, but can only communicate if they arrive at the same location at exactly the same time, i.e. they use the so-called face-to-face communication model. The group search time is defined as the worst-case time as a function of d, the distance of the exit from the origin, when both robots can reach the exit. It has long been known that for a single robot traveling at unit speed, the search time is at least 9d - o(d); a simple doubling strategy achieves this time bound. It was shown recently in [Chrobak et al., 2015] that k >= 2 robots traveling at unit speed also require at least 9d group search time. We investigate energy-time trade-offs in group search by two robots, where the energy loss experienced by a robot traveling a distance x at constant speed s is given by s^2 x, as motivated by energy consumption models in physics and engineering. Specifically, we consider the problem of minimizing the total energy used by the robots, under the constraints that the search time is at most a multiple c of the distance d and the speed of the robots is bounded by b. Motivation for this study is that for the case when robots must complete the search in 9d time with maximum speed one (b=1; c=9), a single robot requires at least 9d energy, while for two robots, all previously proposed algorithms consume at least 28d/3 energy. When the robots have bounded memory and can use only a constant number of fixed speeds, we generalize an algorithm described in [Baeza-Yates and Schott, 1995; Chrobak et al., 2015] to obtain a family of algorithms parametrized by pairs of b,c values that can solve the problem for the entire spectrum of these pairs for which the problem is solvable. In particular, for each such pair, we determine optimal (and in some cases nearly optimal) algorithms inducing the lowest possible energy consumption. We also propose a novel search algorithm that simultaneously achieves search time 9d and consumes energy 8.42588d. Our result shows that two robots can search on the line in optimal time 9d while consuming less total energy than a single robot within the same search time. Our algorithm uses robots that have unbounded memory, and a finite number of dynamically computed speeds. It can be generalized for any c, b with cb=9, and consumes energy 8.42588b^2d.
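
For grounding, the sketch below simulates the classical single-robot doubling strategy referred to above and evaluates the s²·x energy model from the abstract; the particular turning-point sequence and the adversarial exit position are assumptions of the example.

# Doubling strategy on the line and the s^2 * x energy model (Python).
def doubling_search_time(exit_pos):
    """Time for one unit-speed robot to reach the exit using turning points 1, -2, 4, -8, ..."""
    time, limit, side = 0.0, 1.0, 1            # side = +1 (right) or -1 (left)
    while True:
        if (exit_pos > 0) == (side > 0) and limit >= abs(exit_pos):
            return time + abs(exit_pos)        # the exit is passed on this sweep
        time += 2 * limit                      # go to the turning point and come back
        limit, side = 2 * limit, -side

def energy(speed, distance):
    return speed ** 2 * distance               # energy model from the abstract

d = 2 ** 10 + 0.001                            # just beyond a turning point: adversarial exit
t = doubling_search_time(d)
print(t / d)                                   # about 8.998, approaching the worst-case ratio 9
print(energy(1.0, t) == t)                     # True: at unit speed, energy equals distance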

Cite as

Jurek Czyzowicz, Konstantinos Georgiou, Ryan Killick, Evangelos Kranakis, Danny Krizanc, Manuel Lafond, Lata Narayanan, Jaroslav Opatrny, and Sunil Shende. Energy Consumption of Group Search on a Line. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 137:1-137:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{czyzowicz_et_al:LIPIcs.ICALP.2019.137,
  author =	{Czyzowicz, Jurek and Georgiou, Konstantinos and Killick, Ryan and Kranakis, Evangelos and Krizanc, Danny and Lafond, Manuel and Narayanan, Lata and Opatrny, Jaroslav and Shende, Sunil},
  title =	{{Energy Consumption of Group Search on a Line}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{137:1--137:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.137},
  URN =		{urn:nbn:de:0030-drops-107138},
  doi =		{10.4230/LIPIcs.ICALP.2019.137},
  annote =	{Keywords: Evacuation, Exit, Line, Face-to-face Communication, Robots, Search}
}
Document
Reconciling Multiple Genes Trees via Segmental Duplications and Losses

Authors: Riccardo Dondi, Manuel Lafond, and Celine Scornavacca

Published in: LIPIcs, Volume 113, 18th International Workshop on Algorithms in Bioinformatics (WABI 2018)


Abstract
Reconciling gene trees with a species tree is a fundamental problem for understanding the evolution of gene families. Many existing approaches reconcile each gene tree independently. However, it is well known that the evolution of gene families is interconnected. In this paper, we extend a previous approach to reconcile a set of gene trees with a species tree based on segmental macro-evolutionary events, where segmental duplication events and losses are associated with costs delta and lambda, respectively. We show that the problem is polynomial-time solvable when delta <= lambda (via LCA-mapping), while if delta > lambda the problem is NP-hard, even when lambda = 0 and a single gene tree is given, solving a long-standing open problem on the complexity of the reconciliation problem. On the positive side, we give a fixed-parameter algorithm for the problem, where the parameters are delta/lambda and the number d of segmental duplications, with time complexity O(⌈delta/lambda⌉^d · n · delta/lambda). Finally, we demonstrate the usefulness of this algorithm on two previously studied real datasets: we first show that our method can be used to confirm or refute hypothetical segmental duplications on a set of 16 eukaryotes, then show how we can detect whole-genome duplications in yeast genomes.
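
For grounding, here is a minimal sketch of the classical LCA-mapping used in the delta <= lambda case above, on a single binary gene tree: each internal node maps to the LCA of its children's images and is a duplication when it maps to the same species node as one of its children. The tree encodings are assumptions, and the segmental (multi-gene-tree) model of the paper is not handled.

# LCA-mapping reconciliation of one binary gene tree (Python).
def ancestors(parent, node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(parent, a, b):
    anc_b = set(ancestors(parent, b))
    return next(x for x in ancestors(parent, a) if x in anc_b)

def lca_mapping(gene_children, gene_root, species_parent, leaf_species):
    mapping, duplications = dict(leaf_species), set()
    def walk(g):
        kids = gene_children.get(g, [])
        if not kids:
            return mapping[g]
        left, right = walk(kids[0]), walk(kids[1])
        mapping[g] = lca(species_parent, left, right)
        if mapping[g] in (left, right):
            duplications.add(g)                # a child already maps to the same species node
        return mapping[g]
    walk(gene_root)
    return mapping, duplications

species_parent = {"A": "AB", "B": "AB", "AB": "ABC", "C": "ABC"}   # species tree ((A,B),C)
gene_children = {"root": ["g1", "a2"], "g1": ["a1", "b1"]}         # gene tree ((a1,b1),a2)
leaf_species = {"a1": "A", "b1": "B", "a2": "A"}
print(lca_mapping(gene_children, "root", species_parent, leaf_species)[1])   # {'root'}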

Cite as

Riccardo Dondi, Manuel Lafond, and Celine Scornavacca. Reconciling Multiple Genes Trees via Segmental Duplications and Losses. In 18th International Workshop on Algorithms in Bioinformatics (WABI 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 113, pp. 5:1-5:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{dondi_et_al:LIPIcs.WABI.2018.5,
  author =	{Dondi, Riccardo and Lafond, Manuel and Scornavacca, Celine},
  title =	{{Reconciling Multiple Genes Trees via Segmental Duplications and Losses}},
  booktitle =	{18th International Workshop on Algorithms in Bioinformatics (WABI 2018)},
  pages =	{5:1--5:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-082-8},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{113},
  editor =	{Parida, Laxmi and Ukkonen, Esko},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.WABI.2018.5},
  URN =		{urn:nbn:de:0030-drops-93079},
  doi =		{10.4230/LIPIcs.WABI.2018.5},
  annote =	{Keywords: Gene trees/species tree reconciliation, phylogenetics, computational complexity, fixed-parameter algorithms}
}
Document
The complexity of speedrunning video games

Authors: Manuel Lafond

Published in: LIPIcs, Volume 100, 9th International Conference on Fun with Algorithms (FUN 2018)


Abstract
Speedrunning is a popular activity in which the goal is to finish a video game as fast as possible. Players around the world spend hours each day on live stream, perfecting their skills to achieve a world record in well-known games such as Super Mario Bros, Castlevania or Mega Man. But human execution is not the only factor in a successful speedrun. Some common techniques such as damage boosting or routing require careful planning to optimize time gains. In this paper, we show that optimizing these mechanics is in fact a profound algorithmic problem, as they lead to novel generalizations of the well-known NP-hard knapsack and feedback arc set problems. We show that the problem of finding the optimal damage boosting locations in a game admits an FPTAS and is FPT in k + r, where k is the number of enemy types in the game and r the number of health refill locations. However, if the player is allowed to lose a life to regain health, the problem becomes hard to approximate within a factor 1/2, but admits a (1/2 - epsilon)-approximation with two lives. Damage boosting can also be solved in pseudo-polynomial time. As for routing, we show various hardness results, including W[2]-hardness in the time lost in a game, even on bounded-treewidth stage graphs. On the positive side, we exhibit an FPT algorithm for stage graphs of bounded treewidth and bounded in-degree.
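
Purely as a hypothetical illustration of the knapsack connection mentioned above (not the model or algorithm from the paper), the sketch below runs a pseudo-polynomial dynamic program that chooses damage boosts under a health budget so as to maximize the time saved; the boost list and the budget are made up.

# Knapsack-style choice of damage boosts (Python).
def best_time_saved(boosts, health_budget):
    """boosts: list of (health_cost, time_saved); classic 0/1 knapsack DP over health."""
    best = [0.0] * (health_budget + 1)         # best[h] = max time saved spending <= h health
    for cost, saved in boosts:
        for h in range(health_budget, cost - 1, -1):
            best[h] = max(best[h], best[h - cost] + saved)
    return best[health_budget]

boosts = [(3, 2.5), (5, 4.0), (2, 1.0)]        # (health lost, seconds saved)
print(best_time_saved(boosts, 7))              # 5.0: take the 5-cost and 2-cost boosts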

Cite as

Manuel Lafond. The complexity of speedrunning video games. In 9th International Conference on Fun with Algorithms (FUN 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 100, pp. 27:1-27:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{lafond:LIPIcs.FUN.2018.27,
  author =	{Lafond, Manuel},
  title =	{{The complexity of speedrunning video games}},
  booktitle =	{9th International Conference on Fun with Algorithms (FUN 2018)},
  pages =	{27:1--27:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-067-5},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{100},
  editor =	{Ito, Hiro and Leonardi, Stefano and Pagli, Linda and Prencipe, Giuseppe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FUN.2018.27},
  URN =		{urn:nbn:de:0030-drops-88187},
  doi =		{10.4230/LIPIcs.FUN.2018.27},
  annote =	{Keywords: Approximation algorithms, parameterized complexity, video games, knapsack, feedback arc set}
}
Document
On the Weighted Quartet Consensus Problem

Authors: Manuel Lafond and Celine Scornavacca

Published in: LIPIcs, Volume 78, 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)


Abstract
In phylogenetics, the consensus problem consists in summarizing a set of phylogenetic trees that all classify the same set of species into a single tree. Several definitions of consensus exist in the literature; in this paper we focus on the Weighted Quartet Consensus problem, whose complexity status was previously unknown. Here we prove that the Weighted Quartet Consensus problem is NP-hard and we give a 1/2-factor approximation for it. In the process, we propose a derandomization of a previously known randomized 1/3-factor approximation. We also investigate the fixed-parameter tractability of this problem.
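
For grounding, the sketch below recovers the basic object the consensus problem is built on: the quartet topology a tree induces on four leaves, obtained here via the four-point condition on path lengths. The adjacency-list encoding of the unrooted tree is an assumption.

# Induced quartet topology via the four-point condition (Python).
from collections import deque

def dist(adj, a, b):
    seen, queue = {a: 0}, deque([a])
    while queue:
        x = queue.popleft()
        if x == b:
            return seen[x]
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                queue.append(y)

def quartet_topology(adj, a, b, c, d):
    pairings = {(a, b, c, d): dist(adj, a, b) + dist(adj, c, d),
                (a, c, b, d): dist(adj, a, c) + dist(adj, b, d),
                (a, d, b, c): dist(adj, a, d) + dist(adj, b, c)}
    (x, y, z, w), smallest = min(pairings.items(), key=lambda kv: kv[1])
    if list(pairings.values()).count(smallest) > 1:
        return None                            # unresolved (star) quartet
    return f"{x}{y}|{z}{w}"

# Unrooted tree ((a,b),(c,d)) with internal nodes u and v.
adj = {"a": ["u"], "b": ["u"], "c": ["v"], "d": ["v"],
       "u": ["a", "b", "v"], "v": ["c", "d", "u"]}
print(quartet_topology(adj, "a", "b", "c", "d"))   # ab|cd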

Cite as

Manuel Lafond and Celine Scornavacca. On the Weighted Quartet Consensus Problem. In 28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 78, pp. 28:1-28:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{lafond_et_al:LIPIcs.CPM.2017.28,
  author =	{Lafond, Manuel and Scornavacca, Celine},
  title =	{{On the Weighted Quartet Consensus Problem}},
  booktitle =	{28th Annual Symposium on Combinatorial Pattern Matching (CPM 2017)},
  pages =	{28:1--28:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-039-2},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{78},
  editor =	{K\"{a}rkk\"{a}inen, Juha and Radoszewski, Jakub and Rytter, Wojciech},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2017.28},
  URN =		{urn:nbn:de:0030-drops-73284},
  doi =		{10.4230/LIPIcs.CPM.2017.28},
  annote =	{Keywords: phylogenetic tree, consensus tree, quartets, complexity, fixed-parameter tractability}
}
Document
Efficient Non-Binary Gene Tree Resolution with Weighted Reconciliation Cost

Authors: Manuel Lafond, Emmanuel Noutahi, and Nadia El-Mabrouk

Published in: LIPIcs, Volume 54, 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)


Abstract
Polytomies in gene trees are multifurcated nodes corresponding to unresolved parts of the tree, usually due to insufficient differentiation between sequences of homologous gene copies. Apart from gene sequences, other information, such as that contained in the species tree, can be used to resolve such intricate parts of a gene tree. The problem of resolving a multifurcated tree has been considered by many authors, the objective function often being the number of duplications and losses reflected by the reconciliation of the resolved gene tree with the species tree. Here, we present PolytomySolver, an algorithm that handles a more general model allowing different duplication and loss costs per species. The time complexity of this algorithm is linear for unit costs and quadratic for general costs, outperforming the best known solutions so far by a linear factor. We show on simulated trees that this gain in theoretical complexity has a real practical impact on running times.

Cite as

Manuel Lafond, Emmanuel Noutahi, and Nadia El-Mabrouk. Efficient Non-Binary Gene Tree Resolution with Weighted Reconciliation Cost. In 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 54, pp. 14:1-14:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@InProceedings{lafond_et_al:LIPIcs.CPM.2016.14,
  author =	{Lafond, Manuel and Noutahi, Emmanuel and El-Mabrouk, Nadia},
  title =	{{Efficient Non-Binary Gene Tree Resolution with Weighted Reconciliation Cost}},
  booktitle =	{27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016)},
  pages =	{14:1--14:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-012-5},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{54},
  editor =	{Grossi, Roberto and Lewenstein, Moshe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CPM.2016.14},
  URN =		{urn:nbn:de:0030-drops-60907},
  doi =		{10.4230/LIPIcs.CPM.2016.14},
  annote =	{Keywords: gene tree, polytomy, reconciliation, resolution, weighted cost, phylogeny}
}