Documents for «Inria»

Listening to noise – Imaging by cross correlations

Josselin Garnier

1h14min31

Classical imaging techniques use waves to probe an unknown medium and are employed, for example, in medical (ultrasound) or geophysical (seismology) applications. These waves are emitted by arrays of sources and, after propagating through the medium, are recorded by arrays of receivers.

 

These techniques generally break down when used in scattering media containing random inhomogeneities, because the coherent signals coming from the reflectors to be imaged and recorded by the receiver arrays are often drowned out by the incoherent signals coming from the scatterers.

Stochastic and multi-scale analysis makes it possible to understand this situation and to propose original imaging functions.

We will see that these new imaging functions based on cross correlations make it possible to exploit signals that have traveled through very complex media, and also to use signals produced by ambient noise sources.

The possibility of using opportunistic sources, or uncontrolled ambient noise sources, instead of controlled active sources is obviously very exciting from a theoretical point of view, since it challenges the usual signal/noise distinction, and from a practical point of view, for example in seismology, where the scarcity of sources (earthquakes) and the impossibility of controlling them have always been limiting factors.
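To make the key mechanism concrete, here is a minimal numerical sketch (not the speaker's material): in the simplest setting, the cross correlation of ambient-noise recordings at two receivers peaks at the inter-receiver travel time, which is the basic ingredient of correlation-based imaging. The setup is a deliberate simplification with hypothetical parameters (1-D homogeneous medium, a single white-noise source, no scattering).

```python
# Sketch: estimate the travel time between two receivers by cross-correlating
# their ambient-noise recordings. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                       # sampling frequency (Hz)
c = 1500.0                        # wave speed (m/s)
d = 30.0                          # receiver separation (m)
true_lag = d / c                  # travel time between the receivers (s)

n = 2**16
noise = rng.standard_normal(n)                      # ambient noise source
u1 = noise + 0.3 * rng.standard_normal(n)           # recording at receiver 1
u2 = np.roll(noise, int(round(true_lag * fs)))      # delayed copy at receiver 2
u2 += 0.3 * rng.standard_normal(n)                  # incoherent measurement noise

# Cross correlation computed in the Fourier domain; its peak gives the lag.
r = np.fft.ifft(np.fft.fft(u2) * np.conj(np.fft.fft(u1))).real
k = int(np.argmax(r))
estimated_lag = (k if k < n // 2 else k - n) / fs
print(f"true travel time {true_lag:.4f} s, estimated {estimated_lag:.4f} s")
```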

Distributed data structures and consistency criteria

Achour MOSTEFAOUI

39min44

Distributed systems are diverse but can be grouped into two main classes: those communicating by messages and those using shared memory. Since Attiya et al. showed that the two models are equivalent and that each can be emulated on top of the other, an abundant literature has been produced on the emulation of various shared data structures. An important point is the consistency guaranteed by an implementation of a given object. Many consistency criteria have been proposed by different communities (atomicity, PRAM, causal consistency, cache consistency, eventual consistency, etc.). The most intuitive criterion is linearizability: a linearizable implementation behaves exactly as if the object were physically unique and accessed concurrently. Since this criterion is costly to implement, weaker criteria have been proposed. One of them, sequential consistency, is the closest to it: it offers the same guarantees as linearizability at a theoretically lower cost. However, this criterion is not composable. In this talk, we propose an implementation of a sequentially consistent shared memory that is much less costly than a linearizable one, and we exhibit two very commonly encountered usage contexts in which sequential consistency is composable.
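As a concrete illustration of the gap between the two criteria mentioned above, here is a small brute-force checker on a two-operation register history (a sketch, not the implementation proposed in the talk; the history, the timing values and the helper names are all hypothetical). The history is sequentially consistent, because operations of different processes may be reordered, but it is not linearizable, because linearizability must also respect real-time order.

```python
# Sketch: sequential consistency vs. linearizability on a read/write register.
from itertools import permutations

# op = (process, kind, value, invocation_time, response_time)
history = [
    ("p", "write", 1, 0.0, 1.0),   # p writes 1 and gets an acknowledgement
    ("q", "read",  0, 2.0, 3.0),   # strictly later, q reads and still sees 0
]

def legal(seq):
    """Sequential specification of a register initialised to 0."""
    current = 0
    for _, kind, value, _, _ in seq:
        if kind == "write":
            current = value
        elif value != current:          # a read must return the latest write
            return False
    return True

def respects_process_order(seq):
    pos = {op: i for i, op in enumerate(seq)}
    return all(pos[a] < pos[b] for a in history for b in history
               if a[0] == b[0] and a[3] < b[3])

def respects_real_time(seq):
    pos = {op: i for i, op in enumerate(seq)}
    return all(pos[a] < pos[b] for a in history for b in history
               if a[4] < b[3])          # a responded before b was invoked

perms = list(permutations(history))
print("sequentially consistent:",
      any(legal(s) and respects_process_order(s) for s in perms))   # True
print("linearizable:",
      any(legal(s) and respects_real_time(s) for s in perms))       # False
```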

A brief history of atomic commitment

Rachid GUERRAOUI

33min21

This talk briefly retraces the history of distributed algorithms through one of its most fundamental problems: atomic commitment.

Failure detectors

Carole DELPORTE

44min56

This talk gives an overview of failure detectors, introduced by Chandra and Toueg in 1996. Michel Raynal has contributed extensively to research on this topic.

Indistinguishability, Duality and Necessary Conditions

Yoram MOSES

49min34

This talk discusses the most fundamental notion in distributed computing. I will present and prove a universal property of distributed computing, and discuss its implications and applications.

 

Tasks, algorithms, and models of distributed computing with Michel (together with other colleagues)

Sergio RAJSBAUM

37min39

In this talk, I describe some of the work I have done with Michel (and others) over a very productive period that started 15 years ago and will hopefully last for many more, during which we have published an average of two conference papers and one journal paper per year and have met many times in Mexico, France and other places. I concentrate on our more recent
work on interval-linearizability, a notion we introduced to specify objects
more general than the usual sequentially specified objects. While tasks
explicitly state what might happen when a set of processes run concurrently,
objects only specify what happens when processes access the object
sequentially. Remarkably, these are two approaches that have largely remained
independent since the early days of the theory of distributed computing. I
describe the bridges we establish between these two paradigms, and our
discussions about what is a distributed problem, and what it means to solve it.

Folding Turing is hard but feasible

Nicolas Schabanel

1h10min13

We introduce and study the computational power of Oritatami, a theoretical model of greedy molecular folding, in which the molecule begins to fold before the end of its production. This model is inspired by our recent experimental work
demonstrating the construction of shapes at the nanoscale by folding an
RNA molecule during its transcription from an engineered sequence of
synthetic DNA. While predicting the most likely conformation is known to
be NP-complete in other models, Oritatami sequences fold optimally in
linear time. Although our model uses only a small subset of the
mechanisms known to be involved in molecular folding, we show that it is
capable of efficient universal computation, implying that any extension
of this model will have this property as well.
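To give a flavour of the model, here is a toy greedy folder (a sketch only, not Oritatami itself: the actual model works on a triangular lattice with a delay parameter and carefully designed attraction rules, whereas this simplification uses a square lattice, delay 1 and a hypothetical "identical beads attract" rule). Each new bead is placed on a free cell next to the previous one so as to maximise the number of bonds with already placed, non-consecutive beads.

```python
# Toy sketch of greedy molecular folding on a square lattice.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def fold(seq, attracts):
    positions = [(0, 0)]                  # first bead at the origin
    occupied = {(0, 0): seq[0]}
    for bead in seq[1:]:
        x, y = positions[-1]
        best, best_bonds = None, -1
        for dx, dy in NEIGHBOURS:         # candidate cells next to the last bead
            cell = (x + dx, y + dy)
            if cell in occupied:
                continue
            # count bonds with already placed beads, excluding the predecessor
            bonds = sum(
                1
                for ddx, ddy in NEIGHBOURS
                for nb in [(cell[0] + ddx, cell[1] + ddy)]
                if nb in occupied and nb != (x, y) and attracts(bead, occupied[nb])
            )
            if bonds > best_bonds:
                best, best_bonds = cell, bonds
        if best is None:                  # the fold is stuck (surrounded)
            break
        positions.append(best)
        occupied[best] = bead
    return positions

# Hypothetical bead sequence and attraction rule: identical types attract.
print(fold("abcbabcba", lambda s, t: s == t))
```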

This result is the first principled construction in this research
direction, and motivated the development of a generic toolbox, easily
reusable in future work. One major challenge addressed by our design is
that choosing the interactions to get the folding to react to its
environment is NP-complete. Our techniques bypass this issue by allowing some flexibility in the shapes, which makes it possible to encode different "functions" in different parts of the sequence (instead of reusing the same part of the sequence). However, the number of possible interactions
between these functions remains quite large, and each interaction also
involves a small combinatorial optimisation problem. One of the major
challenges we faced was to decompose our programming into various levels of abstraction, allowing us to prove their correctness thoroughly in a human-readable/checkable form.

We hope this framework can be generalised to other discrete dynamical
systems, where proofs of such large objects are often difficult to get.

Joint work with Cody
Geary (Caltech), Pierre-Étienne Meunier (Tapdance, Inria Paris), and Shinnosuke Seki (U. Electro-Communications, Tokyo).

Adding Concurrency to Smart Contracts

Maurice HERLIHY

44min08

Modern cryptocurrency systems, such as Ethereum,
permit complex financial transactions through scripts called *smart contracts*.
These smart contracts are executed many, many times, always without
concurrency. First, smart contracts are serially executed before being recorded
on the blockchain, and once recorded, they are serially re-executed whenever it
is necessary to check that the blockchain is correct. Such serial execution
limits system throughput and fails to exploit today's concurrent multicore and
cluster architectures.



This talk presents a novel way to permit miners and validators to execute smart
contracts in parallel. A party attempting to add a smart contract to the
blockchain executes multiple smart contracts speculatively, in parallel, thus
"discovering" a serializable concurrent schedule for those transactions. This
schedule is captured and encoded as a parallel but deterministic fork-join
program used by later validators to re-execute those transactions concurrently
and securely.
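The following sketch illustrates the overall flow under strong simplifications (it is not the system described in the talk: speculation is simulated sequentially on state copies, and the contract, storage keys and scheduling policy are all hypothetical). A "miner" discovers which storage keys each transaction touches, groups non-conflicting transactions into rounds, and publishes this fork-join schedule; a "validator" then replays the rounds concurrently and deterministically.

```python
# Sketch: speculative footprint discovery + fork-join replay of transactions.
from concurrent.futures import ThreadPoolExecutor

def pay(src, dst, amount):
    """A hypothetical contract call; returns the set of storage keys it touches."""
    def tx(state):
        if state[src] >= amount:
            state[src] -= amount
            state[dst] += amount
        return {src, dst}
    return tx

genesis = {"alice": 10, "bob": 5, "carol": 4, "dave": 0}
txs = [pay("alice", "bob", 3), pay("carol", "dave", 1), pay("bob", "carol", 2)]

def mine(txs, state):
    """Speculation on throwaway copies, then greedy conflict-free rounds."""
    footprints = [tx(dict(state)) for tx in txs]
    schedule, placed = [], [False] * len(txs)
    while not all(placed):
        used, batch = set(), []
        for i, fp in enumerate(footprints):
            if not placed[i] and not (fp & used):    # no key shared with this round
                batch.append(i)
                used |= fp
                placed[i] = True
        schedule.append(batch)
    return schedule                                   # e.g. [[0, 1], [2]]

def validate(txs, schedule, state):
    """Deterministic fork-join replay: rounds run in order, each round in parallel."""
    with ThreadPoolExecutor() as pool:
        for batch in schedule:
            list(pool.map(lambda i: txs[i](state), batch))
    return state

schedule = mine(txs, genesis)
print("fork-join schedule:", schedule)
print("replayed state    :", validate(txs, schedule, dict(genesis)))
```

Transactions in the same round touch disjoint keys, so replaying them in parallel is race-free and yields the same final state as the serial order encoded by the schedule.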



Experiments show non-trivial speedup with even a small number of cores.



Joint work with Thomas Dickerson, Eric Koskinen, and Paul Gazzillo.

An extensionalist analysis of computer simulations

Franck VARENNE

1h01min52

The current spreading of multi-scale and multi-process integrative
simulations is challenging for the philosophy of science. What is the
epistemic function of such a simulation? What does it really show? On
what grounds? Is it an experimental design or a theoretical
construction? To contribute to these debates, I suggest analyzing the
ways computer simulations (CSs) are using symbols. To this end, I will
present an extensionalist analysis of CSs. This approach puts the
emphasis neither on languages nor on models, but more precisely on symbols,
on their extensions, and on their various ways of referring to data or
theories. It shows that the chains of reference of symbols in a CS are
multiple and of different kinds, especially in integrative simulations.
Once these chains are recognized and understood, the associated
processes of calibration and validation can be further explained. It
finally becomes possible to explain why a complete
reduction of the role of CSs to classical epistemic functions such as
“experience”, “experiment”, or “theoretical argument” has become
doubtful.

1.9. McEliece Cryptosystem

Irene MARQUEZ-CORBELLA

05min35

This is the last session of the first week of this MOOC. We already have all the ingredients to talk about code-based cryptography. Recall that in 1976 Diffie and Hellman published their famous paper "New Directions in Cryptography", where they introduced public key cryptography, providing a solution to the problem of key exchange. Mathematically speaking, public key cryptography relies on the notion of a one-way trapdoor function: a function that is easy to compute in one direction and hard in the reverse direction unless you have some special information called the trapdoor. The security of the most popular public key cryptosystems is based either on the hardness of factoring or on the presumed intractability of the discrete log problem. Code-based cryptography is based on the following one-way trapdoor function. It is easy and fast to encode a message using linear transformations, since encoding can be viewed as a matrix multiplication. It is hard to decode a random linear code: recall that the general decoding problem was proven to be NP-complete in the late 1970s. The trapdoor information is that there exist families of codes that have efficient decoding algorithms; we have seen the generalized Reed-Solomon codes and the Goppa codes. McEliece presented, in 1978, the first public key cryptosystem based on error-correcting codes. The security of this scheme is based on two intractable problems: the hardness of decoding, or equivalently the problem of finding codewords of minimal support, and the problem of distinguishing a code with a prescribed structure from a random one.
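As an illustration of this trapdoor, here is a toy McEliece-style sketch (not the 1978 parameters: the real scheme uses binary Goppa codes of large length correcting many errors; a tiny [7,4] Hamming code correcting a single error stands in, purely to show the mechanism of scrambling a structured code and decoding through the trapdoor).

```python
# Toy McEliece-style encryption with a [7,4] Hamming code (illustration only).
import numpy as np

rng = np.random.default_rng(1)

# Systematic generator and parity-check matrices of the [7,4] Hamming code.
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])
SYNDROMES = {tuple(H[:, j]): j for j in range(7)}   # syndrome -> error position

def gf2_inv(M):
    """Inverse of a square binary matrix over GF(2) (None if singular)."""
    n = len(M)
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, n:]

# Key generation: scramble the structured code so that it looks random.
while True:
    S = rng.integers(0, 2, (4, 4))                  # random invertible scrambler
    S_inv = gf2_inv(S)
    if S_inv is not None:
        break
P = np.eye(7, dtype=int)[rng.permutation(7)]        # secret permutation matrix
G_pub = (S @ G @ P) % 2                             # public key; S and P stay secret

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1              # one error: all this tiny code can correct
    return (m @ G_pub + e) % 2

def decode(word):
    """Syndrome decoding of the Hamming code (corrects at most one error)."""
    s = tuple((H @ word) % 2)
    if any(s):
        word = word.copy()
        word[SYNDROMES[s]] ^= 1
    return word

def decrypt(c):
    c_unperm = (c @ P.T) % 2            # undo the permutation (P.T = P^-1)
    codeword = decode(c_unperm)         # the trapdoor: efficient decoding
    return (codeword[:4] @ S_inv) % 2   # G is systematic, then undo S

msg = np.array([1, 0, 1, 1])
cipher = encrypt(msg)
print("ciphertext:", cipher)
print("decrypted :", decrypt(cipher), "expected:", msg)
```

The public key S·G·P is meant to look like a random generator matrix, while the secret S, P and the efficient decoder of the structured code play the role of the trapdoor.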

Coccinelle: synergy between programming language research and the Linux kernel

Julia Lawall

58min39

The Linux kernel underlies all kinds of systems across the computing landscape, from embedded systems to supercomputers. Today, the number of developers contributing to a given Linux kernel version is almost 2000. These range from developers who have many years of experience and work on core parts of the kernel to new developers who contribute a single device driver or bug fix. This volume of contributors and range of experience levels implies that devising means to unambiguously communicate information about kernel properties across the code base is critical. In this talk, we will highlight one of the tools that we have developed in this direction, Coccinelle. Coccinelle provides a domain-specific, code-based language for describing bug-finding rules and evolutions in C code. Since 2007, it has been used in the development of thousands of changes (commits) to the Linux kernel. This talk will give an overview of Coccinelle, its impact on the Linux kernel, and the research projects it has enabled within the Whisper team.

Safety Verification of Deep Neural Networks

Marta Kwiatkowska

58min26

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing a novel automated verification framework for deep neural networks to ensure the safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, which should not affect the classification. The techniques work directly with the network code and, in contrast to existing methods, can offer guarantees that adversarial examples are found if they exist. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques for searching for adversarial examples.
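As a miniature version of the idea, the sketch below encodes a two-layer ReLU network in Z3 and asks whether any input within an L-infinity ball around a reference point changes the classification (a hedged illustration, not the framework from the lecture: the weights, the perturbation bound and the two-dimensional input are hypothetical, and real verification targets much larger trained networks). It requires the z3-solver package.

```python
# Sketch: search for an adversarial perturbation of a tiny ReLU network with Z3.
from z3 import Solver, Reals, If, And, sat

def relu(x):
    return If(x > 0, x, 0)

# Hypothetical tiny network: 2 inputs -> 2 hidden ReLU units -> 2 output scores.
W1 = [[1.0, -1.0], [0.5, 1.0]]
b1 = [0.0, -0.25]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

x0 = [0.8, 0.2]          # reference input
eps = 0.3                # allowed L-infinity perturbation

# Symbolic forward pass.
x = Reals("x0 x1")
h = [relu(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
y = [sum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]

# Concrete forward pass to get the class of the reference point.
h0 = [max(sum(W1[j][i] * x0[i] for i in range(2)) + b1[j], 0.0) for j in range(2)]
y0 = [sum(W2[k][j] * h0[j] for j in range(2)) + b2[k] for k in range(2)]
label = max(range(2), key=lambda k: y0[k])

s = Solver()
s.add(And(*[And(x[i] >= x0[i] - eps, x[i] <= x0[i] + eps) for i in range(2)]))
s.add(y[1 - label] > y[label])          # misclassification inside the ball
if s.check() == sat:
    m = s.model()
    print("adversarial example found:", [m.eval(x[i]) for i in range(2)])
else:
    print("no adversarial example within the ball: the region is safe")
```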