Don Estep will be visiting the CTL group on March 20 and will give a seminar that day. The topic is “Stochastic Inverse Problems for Parameter Determination”; see below for an abstract. The seminar takes place between 14:00 and 15:00 in the Visualisation Studio on the fourth floor of the D-building.
Abstract. A mathematical model of a physical system determines a map between the parameter and data values characterizing the properties of a particular system and the output quantities describing the behavior of the system. In many cases, we can make observations of the behavior of the system, but the parameter and data values cannot be observed directly. This raises the inverse problem of determining the possible parameter/data values that correspond to given observations. A defining characteristic of this inverse problem is that the solutions are set-valued or equivalence classes, since in general multiple parameter values can yield the same output value. Moreover, since observation data generally has a stochastic nature, the solution of the inverse problem is described as a probability measure.
We describe recent work on the formulation, solution, and uncertainty quantification for the parameter identification problem. This new approach has two stages: a systematic way to approximate set-valued solutions of inverse problems, and the use of measure-theoretic techniques for approximating probability distributions in parameter space. We also carry out an error analysis for uncertainty quantification, and describe some current work and the relation to other inverse problems such as data assimilation.
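The setup in the abstract can be written down compactly. In the notation below (ours, not necessarily the speaker's), $\Lambda$ is the parameter/data space, $\mathcal{D}$ the space of output quantities, and $Q$ the model map:

```latex
Q : \Lambda \to \mathcal{D}, \qquad
Q^{-1}(q) = \{\lambda \in \Lambda : Q(\lambda) = q\} \quad \text{(set-valued inverse)}.
```

Since multiple parameter values can yield the same output, $Q^{-1}(q)$ is in general an equivalence class rather than a point. Given a probability measure $P_{\mathcal{D}}$ on the outputs describing the stochastic observations, a solution of the inverse problem is a measure $P_{\Lambda}$ on parameters consistent with it, i.e. $P_{\Lambda}\big(Q^{-1}(A)\big) = P_{\mathcal{D}}(A)$ for measurable $A \subseteq \mathcal{D}$; handling the nonuniqueness on the sets $Q^{-1}(q)$ is what the measure-theoretic machinery in the talk addresses.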
The president of KTH has decided to renew Anders Ynnerman’s affiliated professorship in scientific visualisation at the school of CSC until the end of 2016.
Professor Ynnerman was previously affiliated with CSC in the same area, between 2007 and 2011. Anders Ynnerman is the director of the Visualization Center C in Norrköping and a professor in scientific visualisation at Linköping University, where he has held a chair since 1999. He is also one of the co-founders of the Center for Medical Image Science and Visualization (CMIV).
You can find the archived stream here.
During the open house for Advanced Graphics and Interaction for fall 2013, we streamed the event live. The open house took place on Thursday, December 5, between 17:00 and 19:00 CET. The event was well attended and showcased many good student projects, and you can still try the demos yourself in the Visualisation Studio.
On November 13, at 9:00, there will be another entry in the HPCViz seminar series, once again held in the Visualisation Studio (room 4450, fourth floor of the D-building). Laeeq Ahmed will talk about Parallel Virtual Screening using MapReduce. It is a one-hour lecture.
Drug discovery is the process of screening large chemical libraries to find new medicines. Due to the huge size of these libraries, traditional screening is time-consuming and costly. With advances in computer technology, virtual screening can instead be performed using machine learning techniques to filter large collections of chemical structures. The support vector machine (SVM) is one of the best-known machine learning techniques for classification and regression analysis. In this work we developed a parallel version of SVM-based virtual screening using Spark, an iterative MapReduce programming model, to further reduce the filtering time and thus the cost. I will first introduce Spark and its use in a cluster environment, and then discuss the case study of parallel SVM-based virtual screening.
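As a rough illustration of the map/reduce pattern behind the talk (a toy sketch in plain Python, not the actual Spark implementation; the linear scoring weights, threshold, and molecule names are invented for the example), the screening step partitions the library, scores each compound with a trained SVM decision function in the map phase, and merges the surviving hits in the reduce phase:

```python
from functools import reduce

# Toy "trained" linear SVM decision function: score = w . x + b.
# Weights and bias are invented for illustration only.
W = [0.8, -0.5, 0.3]
B = -0.1

def decision(x):
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def map_partition(partition):
    # Map step: keep compounds whose SVM score is positive.
    return [(name, decision(feats)) for name, feats in partition
            if decision(feats) > 0.0]

def reduce_hits(a, b):
    # Reduce step: merge partial hit lists from the partitions.
    return a + b

# A tiny "chemical library" of (name, feature-vector) pairs,
# split into partitions as a cluster scheduler might do.
library = [("mol-A", [1.0, 0.2, 0.5]), ("mol-B", [0.1, 0.9, 0.0]),
           ("mol-C", [0.7, 0.1, 0.9]), ("mol-D", [0.0, 1.0, 0.1])]
partitions = [library[:2], library[2:]]

hits = reduce(reduce_hits, map(map_partition, partitions))
print(sorted(name for name, score in hits))  # ['mol-A', 'mol-C']
```

In Spark the same structure would be expressed over an RDD of compounds, with the map and reduce steps distributed across the cluster instead of run locally.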
On September 23 there will be a guest lecture in the HPCViz seminar series, held in the D3 lecture hall between 11:00 and 12:00. We will be visited by John Wilkes of Google, who will speak on cluster management at Google.
Cluster management at Google
Cluster management is the term that Google uses to describe how we control the computing infrastructure in our datacenters that supports almost all of our external services. It includes allocating resources to different applications across our fleet of computers, looking after software installations, hardware, monitoring, and many other things. My goal is to present an overview of some of these systems, introduce Omega, the new cluster-manager tool we are building, and present some of the challenges that we’re facing along the way. Many of these challenges represent research opportunities, so I’ll spend the majority of the time discussing those.
John Wilkes has been at Google since 2008, where he is working on cluster management and infrastructure services. He is interested in far too many aspects of distributed systems, but a recurring theme has been technologies that allow systems to manage themselves. In his spare time he continues, stubbornly, trying to learn how to blow glass.
John will be around for individual discussions after the talk – if interested please inform firstname.lastname@example.org
Mikael Vejdemo Johansson will give part two of his seminar on “Topological Data Analysis – applying homology in medicine, robotics, sensor networks, and graphics”, see abstract from part one here.
We will continue to look at the foundations of these generalized applied topology methods in some detail, and see how they have been applied in the past.
Xavi Aguilar will hold his seminar, “Parallel Performance: tools and techniques”.
Abstract: HPC systems are becoming more heterogeneous, with higher levels of concurrency and energy constraints. In such scenarios, the gap between system peak performance and real application performance is widening due to the complexity of the platform and its programmability. Thus, the use of tools and runtime systems plays a major role to achieve good application performance and scalability.
In this seminar we will give an overview of the state of the art in performance analysis, together with a success story from the ScalaLife project, where a performance analysis study was used to redesign a chemistry application (DALTON), obtaining a considerable increase in its scalability.
We will also look at the challenges that tools face on the road to the exascale era, and at the current research in performance analysis carried out within the department.
Christopher Peters, HPCViz
Title: Interacting with Virtual Embodied Agents: Computation and Evaluation
Mikael Vejdemo Johansson, CVAP
Title: Topological Data Analysis – applying homology in medicine, robotics, sensor networks, and graphics
Abstract: In recent decades, computation and data analysis techniques have matured to the point where our entire society runs on machine learning and data analysis – the commercials you see are generated from analyses of your shopping behavior, your travel is optimized with data collected from past travel intensities, the medical care you receive is optimized by data-intensive studies of various kinds. Data becomes available at high volume and high speed, and techniques to deal with data grow by leaps and bounds.
In the past decade, various approaches to data analysis rooted in algebraic topology have gained traction as a vital research field.
Already clustering – a powerful family of methods with ubiquitous application – is an essentially topological technique, and generalizations are increasingly useful.
We shall look at the foundations of these generalized applied topology methods in some detail, and see how they have been applied in the past.
Along the journey, we shall meet classifications of breast cancer types, statistics of naturally occurring images, approaches to understanding how languages encode color, and methods for understanding periodicity in complex systems.
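The remark above that clustering is essentially topological can be made concrete: single-linkage clustering at scale ε is exactly the set of connected components (zero-dimensional homology) of the graph joining points closer than ε. A minimal sketch in plain Python (the point set and scale are invented for the example; real TDA work would use a dedicated library):

```python
from itertools import combinations

def single_linkage(points, eps):
    """Cluster points by the connected components of the eps-neighbor
    graph (0-dimensional topology), using a simple union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    # Link every pair of points closer than eps.
    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) < eps:
            union(i, j)

    # Collect the components: these are the single-linkage clusters.
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return sorted(sorted(c) for c in clusters.values())

# Two well-separated blobs: at eps = 1.0 they form two components.
pts = [(0.0, 0.0), (0.5, 0.1), (0.2, 0.4), (5.0, 5.0), (5.3, 4.8)]
print(single_linkage(pts, 1.0))  # [[0, 1, 2], [3, 4]]
```

Persistent homology, the workhorse of the applications listed above, generalizes this by tracking how components (and higher-dimensional features such as loops) appear and disappear as ε varies, rather than fixing a single scale.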
Here you can find some basic information on the visualisation studio at KTH (the VIC studio). If you have questions, feel free to contact us.