MaGiX@LiX 2011 Abstracts

*Monday, 16h10–17h00*). The

*Thursday, 10h00–10h50*). Writing parallel programs is
not easy, and debugging them is usually a nightmare. To cope with
these difficulties, the *skeleton programming* approach uses a
set of predefined patterns for parallel computations. The skeletons
are higher-order functional templates that describe the program's
underlying parallelism.
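As a toy illustration of the idea (not taken from any particular skeleton library), two of the most classical skeletons, the task farm and the pipeline, can be written as ordinary higher-order functions; the application programmer then only supplies sequential code:

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, nworkers=4):
    """Task-farm skeleton: apply `worker` to independent tasks in parallel."""
    with ThreadPoolExecutor(max_workers=nworkers) as ex:
        return list(ex.map(worker, tasks))   # results come back in task order

def pipeline(*stages):
    """Pipeline skeleton: compose sequential stages into one function."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Only the sequential worker functions are application-specific:
squares = farm(lambda x: x * x, range(5))          # [0, 1, 4, 9, 16]
step = pipeline(lambda x: x + 1, lambda x: 2 * x)  # step(3) == 8
print(squares, step(3))
```

The point of the pattern is that the parallel plumbing is written once, inside the skeleton, and debugged once.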

*Thursday, 17h10–18h00*). With
multicore processors becoming commodity, how can computational
mathematics spend Moore's dividend? I will present some perspectives
based on the development of the

*Friday,
16h10–17h00*). It is well-known that obtaining efficient
algorithms for the change of ordering of Gröbner bases is crucial in polynomial system solving. This task is
classically tackled by linear algebra operations using the FGLM
algorithm. However, with recent progress on Gröbner bases
computations, this step turns out to be the bottleneck of the whole
solving process. Recently, Faugère and Mou proposed an
algorithm (

In this talk we report a high performance implementation of this
algorithm. Basic elementary steps were recoded to take advantage of
new features of recent Intel processors (SSE4 operations). Moreover,
the linear algebra library was upgraded to a multi-threaded
implementation. It outperforms by several orders of magnitude the

*Monday,
14h50–15h40*).

*Tuesday,
16h10–17h00*).

*Thursday,
14h–14h50*). I will discuss aspects of the design of the

*Wednesday,
11h10–12h00*). TeXmacs is a free system for editing
technical documents available on

*Tuesday, 9h30–10h20*).

*Thursday,
10h50–11h40*). General purpose systems for computer algebra
such as

Since one major objective of the

*Friday, 10h50–11h40*). Currently, there exists a big
gap between formal computer-understandable mathematics and informal
mathematics, as written by humans. When looking more closely, there
are two important subproblems: making documents written by humans at
least syntactically understandable for computers, and the formal
verification of the actual mathematics in the documents. In our talk,
we will focus on the first problem, by discussing some of the new
semantic editing features which have been integrated in the GNU
TeXmacs mathematical text editor.

In short, the basic idea is that the user enters formulas in a visually oriented (hence user-friendly) manner. In the background, we continuously run a packrat parser, which attempts to convert (potentially incomplete) formulas into content markup. As long as all formulas remain sufficiently correct, the editor can then operate at both a visual and a semantic level, independently of the low-level representation being used. A related topic, which will also be discussed, is the automatic correction of syntax errors in existing mathematical documents. In particular, the syntax corrector that we have implemented enables us to upgrade existing documents and to test our parsing grammar on various books and papers from different sources.
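To illustrate the parsing idea (with a toy grammar, not the actual TeXmacs one), here is a minimal packrat-style parser: recursive descent with memoization, which tolerates incomplete input by leaving a hole `'?'` in the content tree, so that partially typed formulas still yield usable markup:

```python
from functools import lru_cache

# Toy grammar (hypothetical):
#   Expr   -> Term (('+'|'-') Term)*
#   Term   -> Factor ('*' Factor)*
#   Factor -> letter | digit
def parse(src):
    @lru_cache(maxsize=None)
    def factor(i):
        if i < len(src) and src[i].isalnum():
            return src[i], i + 1
        return None

    @lru_cache(maxsize=None)
    def term(i):
        r = factor(i)
        if r is None:
            return None
        node, i = r
        while i < len(src) and src[i] == '*':
            r = factor(i + 1)
            if r is None:                        # incomplete input:
                return ('*', node, '?'), i + 1   # keep a hole in the tree
            rhs, i = r
            node = ('*', node, rhs)
        return node, i

    @lru_cache(maxsize=None)
    def expr(i):
        r = term(i)
        if r is None:
            return None
        node, i = r
        while i < len(src) and src[i] in '+-':
            op = src[i]
            r = term(i + 1)
            if r is None:
                return (op, node, '?'), i + 1
            rhs, i = r
            node = (op, node, rhs)
        return node, i

    return expr(0)

# Both a complete and an incomplete formula yield content trees:
print(parse("a+b*c"))   # (('+', 'a', ('*', 'b', 'c')), 5)
print(parse("a+"))      # (('+', 'a', '?'), 2)
```

Memoizing each nonterminal at each position is what makes the parser "packrat": parsing stays linear in the input length even with backtracking.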

*Tuesday,
10h50–11h40*). We will present the implementations of the
elementary operations with polynomials and matrices available in the
C++ libraries of

This work is in collaboration with J. van der Hoeven.

*Monday, 17h00–17h50*). The FreeMABSys project aims at
developing the

*Monday, 14h00–14h50*). The method of Taylor models
provides rigorous enclosures of functions over given domains, where
the bulk of the dependency is represented by a high order multivariate
Taylor polynomial, and the remaining error is enclosed by a remainder
bound. In this talk, we will discuss how to construct Taylor model
arithmetic on computers, which naturally includes integration as a
part of the arithmetic. Computations using Taylor models provide rigorous
results, and the advantageous features of the method have been shown
to solve various practical problems that were previously unsolvable. The
applications range from mere range bounding of functions to
sophisticated rigorous global optimization, and especially
fruitful is their use in rigorous solvers of differential equations.
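In one variable the construction can be sketched in a few lines. This is a toy model only: floating-point rounding is ignored here, whereas real Taylor model arithmetic rounds the remainder outward so that the enclosure remains proven. A model on the domain x ∈ [−1, 1] is a polynomial of order ≤ N plus an interval remainder:

```python
N = 4  # truncation order

def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

class TaylorModel:
    """Enclosure P(x) + [lo, hi] of a function on the domain x in [-1, 1]."""
    def __init__(self, coeffs, rem=(0.0, 0.0)):
        tail = sum(abs(c) for c in coeffs[N + 1:])  # orders > N -> remainder
        self.c = (list(coeffs) + [0.0] * (N + 1))[:N + 1]
        self.rem = (rem[0] - tail, rem[1] + tail)   # valid since |x| <= 1

    def __add__(self, other):
        return TaylorModel([a + b for a, b in zip(self.c, other.c)],
                           iadd(self.rem, other.rem))

    def __mul__(self, other):
        # (P1+R1)(P2+R2) = P1*P2 + P1*R2 + P2*R1 + R1*R2
        prod = [0.0] * (2 * N + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                prod[i + j] += a * b
        b1 = sum(abs(c) for c in self.c)    # |P1| <= sum |c_k| on [-1, 1]
        b2 = sum(abs(c) for c in other.c)
        rem = iadd((other.rem[0] * b1, other.rem[1] * b1),
                   (self.rem[0] * b2, self.rem[1] * b2))
        rem = iadd(rem, imul(self.rem, other.rem))
        return TaylorModel(prod, rem)

    def bound(self):
        """Interval enclosing the function's range over [-1, 1]."""
        b = sum(abs(c) for c in self.c[1:])
        return (self.c[0] - b + self.rem[0], self.c[0] + b + self.rem[1])

x = TaylorModel([0.0, 1.0])   # the model of f(x) = x
f = x * x + x                 # model of x^2 + x
print(f.c, f.bound())         # the bound contains the true range [-1/4, 2]
```

The key property is that every operation keeps the invariant "true function lies in polynomial + remainder", so composite computations inherit rigour automatically.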

*Friday, 9h30–10h00*). We discuss two specific
geometric algorithms in **realroot** and **shape**

*Tuesday, 14h00–14h50*).
Parallel hardware architectures (multicores, graphics processing
units, etc.) and computer memory hierarchies (from processor registers
to hard disks via successive cache memories) impose an evolution in
the design of scientific software. Algorithms for scientific
computation have often been designed with algebraic complexity as the
main complexity measure and with serial running time as the main
performance counter. On modern computers minimizing the number of
arithmetic operations is no longer the primary factor affecting
performance. Effective utilization of the underlying architecture
(reducing pipeline stalls, increasing the amount of instruction level
parallelism, better utilizing the memory hierarchy) can be much more
important.

This talk is an introduction to performance issues related to data locality and parallelism for matrix and polynomial arithmetic operations, targeting multicore and GPU implementations. A first part will be dedicated to data locality independently of parallelism, using serial algorithms and C programs as illustrations. A second part will discuss analyzing and optimizing multithreaded programs on multicores. In the third part, we will switch to GPU-specific issues and then compare the implementation techniques for the two architecture platforms.
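The classic example of the data-locality theme is the loop-tiling transformation for matrix multiplication, sketched here in Python for readability (the techniques discussed in the talk concern tuned C and GPU code). The arithmetic is unchanged; only the traversal order is blocked so that each tile is reused while it still fits in cache:

```python
def matmul_blocked(A, B, n, bs=4):
    """n x n matrix product with bs x bs loop tiling (illustrative sketch)."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):                # iterate over tiles...
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                # ...then multiply one pair of bs x bs tiles
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_blocked(A, B, 2, bs=1))   # same result as the naive triple loop
```

In compiled code, the number of cache misses drops roughly by a factor of the tile size, which is exactly the effect the talk's performance counters measure.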

*Wednesday, 10h00–10h50*).

*Friday, 14h00–14h50*).
Floating-point (FP) arithmetic was designed as a mere approximation to
real arithmetic. And yet, since the behaviour of each operation is
fully specified by the

*Thursday, 12h00–12h50*). Synchronous
languages were invented for programming embedded control software.
They have been used for the development of the most critical parts of
software in various domains, notably, avionics, power generation,
railways, circuits, etc.

The success of the original synchronous languages has spurred the development of new languages, based on the same principles, that increase modularity, expressiveness and that address new applications like real-time video streaming, latency-insensitive designs, large scale simulations, hybrid (continuous/discrete) systems.

*Tuesday, 11h40–12h30*). We will describe a new
algorithm for the computation of the topology of plane curves.

Such problems are usually solved by combining two kinds of algorithms: those using exact computations (resultants, Sturm sequences, etc.) and those using semi-numerical or numerical computations (approximations of roots, refinements of isolation boxes).

The global efficiency of the related solver is a balance between symbolic computations (slow, but yielding a lot of certified information) and numerical computations (fast, but possibly inaccurate or uncertified), and also a balance between the use of asymptotically fast basic operations and more classical ones.

In this lecture, we will present a new global and fully certified algorithm, including a new bivariate solver, and we will explain our algorithmic choices (exact vs semi-numerical) as well as some of our technical choices for the implementation (basic algorithms, external libraries, multi-threaded variants).

*(Tuesday 14h50–15h40)*. Most fundamental algorithmic
questions in real algebraic geometry can be solved by the so-called
Cylindrical Algebraic Decomposition algorithm, whose complexity is
doubly exponential in the number of variables. During the last 20
years, tremendous efforts have led to algorithms based on the
so-called critical point method, yielding much better complexity
results. RAGlib is an attempt to put these more recent
techniques into practice by implementing algorithms sharing some of the geometric
ideas that have led to theoretical complexity breakthroughs.

After a historical perspective, algorithms will be reviewed and some applications that have been solved using RAGlib will be presented. Efficiency questions will also be discussed and new functionalities which will appear in the next release will be presented.

*Thursday, 14h50–15h40*). I will present

We are currently restructuring the code, converting these parts into separate libraries which may be used individually or together. Although this is work in progress, this set of libraries may already be useful.

*Tuesday, 17h00–17h50*). Computations
on Euclidean lattices arise in a broad range of fields of computer
science and mathematics. A very common task consists in transforming
an arbitrary basis of a given lattice into a “nicer-looking”
one, made of somewhat orthogonal vectors. This reduction process,
whose most famous instance is LLL, heavily relies on Gram-Schmidt
Orthogonalisations. One can significantly accelerate lattice reduction
by replacing the rational arithmetic traditionally used for the
underlying GSO computations with approximate floating-point arithmetic.
In this talk, I will elaborate on this mixed numeric-algebraic
approach, which has been implemented in the

The cornerstone of lattice algorithmics is the famous LLL reduction algorithm which, despite being polynomial-time, remains somewhat slow. It can be significantly sped up by replacing the exact rational arithmetic used for the underlying Gram-Schmidt computations with approximate floating-point arithmetic.

A natural means of speeding up an algorithm consists in working on an approximation of the input with smaller bit-size and showing that the work performed on the approximation is relevant for the actual exact input. Unfortunately, a lattice basis that is close to an LLL-reduced basis may not be itself LLL-reduced.
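To make the mixed numeric-algebraic idea concrete, here is a toy LLL in which the basis stays exact (integer vectors) while the Gram-Schmidt data steering the reduction is recomputed in plain floating-point. This is a sketch of the principle only; the implementations discussed in the talk control the rounding errors far more carefully:

```python
def gso(basis):
    """Floating-point Gram-Schmidt: squared norms |b*_i|^2 and mu coefficients."""
    n = len(basis)
    bstar = [[float(x) for x in b] for b in basis]
    mu = [[0.0] * n for _ in range(n)]
    norms = [0.0] * n
    for i in range(n):
        for j in range(i):
            mu[i][j] = sum(float(x) * y
                           for x, y in zip(basis[i], bstar[j])) / norms[j]
            bstar[i] = [x - mu[i][j] * y for x, y in zip(bstar[i], bstar[j])]
        norms[i] = sum(x * x for x in bstar[i])
    return norms, mu

def lll(basis, delta=0.75):
    """Toy LLL: exact integer basis, approximate floating-point GSO."""
    basis = [list(b) for b in basis]
    n, k = len(basis), 1
    while k < n:
        _, mu = gso(basis)
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            r = round(mu[k][j])
            if r:
                basis[k] = [x - r * y for x, y in zip(basis[k], basis[j])]
                _, mu = gso(basis)
        norms, mu = gso(basis)
        if norms[k] >= (delta - mu[k][k - 1] ** 2) * norms[k - 1]:
            k += 1                              # Lovász condition holds
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

B = lll([[201, 37], [1648, 297]])
print(B)   # a reduced basis of the same lattice, with much shorter vectors
```

Only additions of integer multiples and swaps touch the basis itself, so the lattice (and its determinant) is preserved exactly even though the steering data is approximate.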

*Friday,
17h00–17h50*). The Number Field Sieve is the leading
algorithm for factoring large integers, and has been used over the
past 20 years to establish many records in this area. The latest of
these records is the factorization of RSA-768 in 2010 by a joint effort
of several teams worldwide. We discuss in this talk several
algorithmic aspects of NFS which are necessary features of a
state-of-the-art implementation like

*Monday, 11h40–12h30*). Interval analysis (IA) makes it
possible to make proven statements about sets, based on approximate,
floating-point computations. It may thus be possible to prove that the
set of all solutions of a given problem is empty, or a singleton, or
to bracket it between computable inner and outer approximations. The
first part of this talk will present examples of problems in which one
is interested in sets rather than in point solutions. These examples
are taken from robotics, robust control and compartmental modeling
(widely used in biology). The basics of IA will be recalled in a
second part. Algorithms will be briefly presented that can be used to
find all solutions of sets of nonlinear equations or inequalities, or
all optimal estimates of unknown parameters of models based on
experimental data. These models may be defined by uncertain nonlinear
ordinary differential equations for which no closed form solution is
available. The third and last part of the talk will return to the
introductory examples, to see what can be achieved and what cannot,
what the advantages and limitations of IA are compared to alternative
techniques, and what challenges IA must face to become part
of the standard engineering tools.
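A minimal sketch of this set-oriented use of intervals (toy code with an example equation chosen here; real IA libraries additionally use outward rounding so that the statements are proven): enclose *all* solutions of x² = 2 in a starting interval, discarding every sub-interval on which the interval evaluation excludes zero:

```python
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def f_interval(x):
    """Interval extension of f(t) = t^2 - 2 (zeros: +/- sqrt(2))."""
    return isub(imul(x, x), (2.0, 2.0))

def solve(box, tol=1e-6):
    """Return small sub-intervals that may contain a zero of f (branch & prune)."""
    out, work = [], [box]
    while work:
        lo, hi = work.pop()
        flo, fhi = f_interval((lo, hi))
        if flo > 0 or fhi < 0:
            continue                  # proven: no solution in this interval
        if hi - lo < tol:
            out.append((lo, hi))      # small candidate interval
        else:
            m = (lo + hi) / 2
            work += [(lo, m), (m, hi)]
    return out

boxes = solve((-10.0, 10.0))
print(len(boxes))   # every solution of x^2 = 2 lies in one of these boxes
```

Note the asymmetry typical of IA: discarding a box is a *proof* of emptiness, while a surviving box is merely a candidate to be refined further.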

*Thursday, 16h20–17h10*). It has now been two decades
since the work on

*Friday, 11h00–11h40*). Accurate
computer recognition of handwritten mathematics offers a
natural interface for mathematical computing, document creation and
collaboration. Mathematical handwriting, however, provides a number of
challenges beyond what is required for the recognition of handwritten
natural languages. On one hand, it is usual to use symbols from a
range of different alphabets and there are many similar-looking
symbols. Mathematical notation is two-dimensional and size and
placement information is important. Additionally, there is no fixed
vocabulary of mathematical “words” that can be used to
disambiguate symbol sequences. On the other hand there are some
simplifications. For example, symbols do tend to be well-segmented.
With these characteristics, new methods of character recognition are
important for accurate handwritten mathematics input.

We present a geometric theory that we have found useful for recognizing mathematical symbols. Characters are represented as parametric curves approximated by certain truncated orthogonal series. This maps symbols to a low-dimensional vector space of series coefficients in which the Euclidean distance is closely related to the variational integral between two curves. This can be used to find similar symbols very efficiently. We describe some properties of mathematical handwriting data sets when mapped into this space and compare classification methods and their confidence measures. We also show how, by choosing the functional basis appropriately, the series coefficients can be computed in real-time, as the symbol is being written and, by using integral invariant functions, orientation-independent recognition is achieved. The beauty of this theory is that a single, coherent view provides several related geometric techniques that give a high recognition rate and that do not rely on peculiarities of the symbol set.
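The mapping from curves to coefficient vectors can be sketched as follows (an illustrative choice of basis and quadrature, not the speaker's implementation): approximate one coordinate of a stroke by its first few Legendre series coefficients, and compare strokes by Euclidean distance in coefficient space:

```python
import math

def legendre(k, t):
    """Legendre polynomial P_k(t) via the three-term recurrence."""
    if k == 0:
        return 1.0
    p0, p1 = 1.0, t
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * t * p1 - n * p0) / (n + 1)
    return p1

def coeffs(curve, order=4, samples=200):
    """Truncated Legendre coefficients of t -> curve(t) on [-1, 1].

    c_k = (2k+1)/2 * integral of curve(t) * P_k(t) dt, by the midpoint rule.
    """
    h = 2.0 / samples
    ts = [-1 + (i + 0.5) * h for i in range(samples)]
    return [(2 * k + 1) / 2.0 * h * sum(curve(t) * legendre(k, t) for t in ts)
            for k in range(order + 1)]

def dist(u, v):
    """Euclidean distance of coefficient vectors (~ L2 distance of curves)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# one coordinate of three strokes: two similar lines and one arc
c_line  = coeffs(lambda t: t)
c_line2 = coeffs(lambda t: 1.05 * t + 0.02)
c_arc   = coeffs(lambda t: t * t)
print(dist(c_line, c_line2), dist(c_line, c_arc))  # the similar pair is closer
```

Because the basis is orthogonal, closeness of coefficient vectors tracks closeness of the curves themselves, which is what makes nearest-neighbour classification in this low-dimensional space effective.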

*Monday, 10h50–11h40*). Computable
Analysis studies all aspects of computability and complexity over
real-valued data. In 1955 A. Grzegorczyk and D. Lacombe proposed a
definition of computable real functions. Their idea became the basis
of the "representation approach" for computability in
Analysis, TTE (Type-2 Theory of Effectivity). TTE supplies a uniform
method for defining natural computability on a variety of spaces
considered in Analysis such as Euclidean space, spaces of continuous
real functions, open, closed or compact subsets of Euclidean space,
computable metric spaces, spaces of integrable functions, spaces of
probability measures, Sobolev spaces and spaces of distributions.
There are various other approaches for studying computability in
Analysis, but for this purpose TTE still seems to be the most useful
one.

In TTE, computability of functions on the set of infinite sequences over a finite alphabet is defined explicitly. Then infinite sequences are used as “names” for “abstract” objects such as real numbers, continuous real functions etc. A function on the abstract objects is called computable, if it can be realized by a computable function on names.

In the talk some basic concepts of TTE are presented and illustrated by examples. Contents: Aims of computable analysis, The representation approach (realization), Computable topological spaces, Representations of subset spaces, Computable metric spaces, Computational complexity, Final remarks.
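The realization idea can be made concrete with a toy Cauchy representation (an illustrative choice, not the full TTE machinery): a name of a real x is a function n ↦ q with |x − q| ≤ 2⁻ⁿ, and an operation on reals is computable when some computable function on names realizes it:

```python
from fractions import Fraction

def sqrt2_name(n):
    """A name for sqrt(2): a rational within 2^-n, found by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo   # sqrt(2) lies in [lo, hi], so |sqrt(2) - lo| <= 2^-n

def real_add(x_name, y_name):
    """Realizer of addition: a 2^-n answer needs both inputs at 2^-(n+1)."""
    return lambda n: x_name(n + 1) + y_name(n + 1)

s = real_add(sqrt2_name, sqrt2_name)   # a name for 2*sqrt(2)
print(float(s(20)))                    # a rational within 2^-20 of 2*sqrt(2)
# Note: equality of reals is undecidable in this model -- no finite part of
# a computation on names can certify that two names denote the same number.
```

The same pattern, names plus realizers, is what TTE generalizes from the reals to metric spaces, function spaces and the other spaces listed above.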

*Friday, 14h50–15h40*). The first public version of

© 2011 Joris van der Hoeven
