December 10th 2012, Sheraton Brussels.
The Industry and Government Track of the IEEE ICDM conference will bring together academics and practitioners to discuss data mining challenges and opportunities that are emerging in both industry and government. Issues that will be addressed include how to intelligently leverage novel data sources (e.g. social media data, networked data, textual data), taking into account issues such as privacy, big data, and heterogeneous datasets, and the application of novel data mining algorithms.
|Monday, December 10| |
|---|---|
|9.00|Keynote talk by James Fan (IBM Research): Human vs. Machine: How Watson beat the all-time best Jeopardy champions|
|10.30|Data mining lessons from half a century of credit scoring by Tony Van Gestel (Dexia)|
|11.00|Tectonic Shifts in Television Advertising by Brendan Kitts (PrecisionDemand)|
|11.30|Data mining for official statistics by Bart Buelens (Statistics Netherlands)|
|13.30|Keynote talk by Foster Provost (New York University): Mining (Massive) Consumer Behavior Data for Marketing|
|14.30|Data Mining Framework for Monitoring Nuclear Facilities by Ranga Raju Vatsavai (Oak Ridge National Laboratory, US)|
|15.00|Distributed Big Advertiser Data Mining by Ashish Bindra (nPario)|
|16.00|Keynote talk by John Crombez (Belgian State Secretary for the Fight against Social and Tax Fraud): Increased efficiency of fraud inspection through Data Mining|
|17.00|Automation of prediction of rare events in big data: is it possible (today)? by Thierry Van de Merckt (BISide, Solvay Business School)|
|17.30|Big Data and Fraud Detection in Government and Banking: Lessons Learned So Far by Jerome Bryssinck (SAS Institute)|
Public institutions must carry out their mission with maximum effectiveness. This also applies to the inspection services engaged in the fight against fraud. Their resources, already scarce today and set to become scarcer, must be deployed in a more targeted way. This is possible: a wealth of information is available, and by combining it with appropriate human resources, the right technological tools, and the right organizational design, the inspection services can considerably increase their effectiveness and efficiency through Data Mining. The key to awareness and support, both within the inspection services and among politicians, is performance evaluation.
John Crombez is the Belgian State Secretary for the Fight against Social and Tax Fraud. He studied economics at Ghent University (Belgium) and statistical sciences at the University of Neuchâtel (Switzerland), and obtained a doctorate in economics at Ghent University. He began his academic career as a lecturer in the Department of Financial Economics at Ghent University (1996-2001). His political career began in 2003 (until 2005) as an advisor to the Deputy Prime Minister and Minister of Public Enterprises. He later served in the cabinet of the Deputy Prime Minister and Minister of Budget, as secretary of the Socialist Group in the European Parliament, and as leader of the Socialist group in the Flemish Parliament. On 6 December 2011 he was appointed by King Albert II as Secretary of State for the Fight against Social and Tax Fraud in the federal Di Rupo government.
The era of "big data" has brought marketers and advertisers the opportunity to base decisions on data-driven models fueled by massive data on consumer behavior. I will discuss several applications involving different sorts of consumer behavior data: web content visitation, mobile "geo-social" behavior, and financial transaction behavior. In each case, the consumer behavior can be represented as the edges of a bipartite graph between consumers and some other entity (e.g., web content, locations, payment receivers). The bipartite graph can then be mined for exploratory and predictive modeling. I will show how mining these data can help with understanding consumer interests and with targeting offers and advertisements. For example, website content visitation does a remarkable job of representing consumers' interests, and fine-grained financial transaction data can improve offer targeting over best-practices modeling with socio-demographic data. I also will show clear evidence that with consumer behavior data, even at very large scale, additional data continues to improve predictive modeling significantly, providing support that big data is indeed a valuable strategic asset.
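The bipartite representation described in the abstract can be illustrated with a minimal sketch. All data and names below are invented for illustration: consumers and content items form the two node sets, interactions form the edges, and a candidate item is scored for a target consumer by the similarity-weighted votes of other consumers linked to that item (a simple neighbor-based model, not the speaker's actual method).

```python
from collections import defaultdict

# Toy consumer-behavior data: each (consumer, content item) pair is an edge
# of a bipartite graph between consumers and the content they visited.
edges = [
    ("alice", "sports"), ("alice", "finance"),
    ("bob", "finance"), ("bob", "travel"),
    ("carol", "sports"), ("carol", "travel"),
]

# Adjacency: each consumer maps to the set of items they interacted with.
neighbors = defaultdict(set)
for consumer, item in edges:
    neighbors[consumer].add(item)

def jaccard(a, b):
    """Similarity of two consumers via their shared bipartite neighbors."""
    inter = len(neighbors[a] & neighbors[b])
    union = len(neighbors[a] | neighbors[b])
    return inter / union if union else 0.0

def score(target, item):
    """Similarity-weighted votes of other consumers linked to `item`."""
    return sum(jaccard(target, c) for c in neighbors
               if c != target and item in neighbors[c])

# "travel" is recommended to alice via her similarity to bob and carol.
print(score("alice", "travel"))
```

At scale the same idea is applied to sparse graphs with millions of nodes, where the adjacency sets would live in a sparse matrix or a distributed store rather than an in-memory dict.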
Foster Provost is Professor and NEC Faculty Fellow at the NYU Stern School of Business. He previously was Editor-in-Chief of the journal Machine Learning, Program Chair of the ACM KDD conference, and his research has been the basis for several NYC-based startups. Prof. Provost's research focuses on data science, especially predictive modeling based on consumer behavior data for marketing/advertising applications, and on the application of micro-outsourcing systems (e.g., Mechanical Turk) to enhance predictive modeling. Previously he studied predictive modeling for applications including fraud detection, counterterrorism, network diagnosis, and others. Prof. Provost's work has won (among others) IBM Faculty Awards, a President's Award at NYNEX Science and Technology, Best Paper awards at KDD, including the Best Industry Paper this year, and the 2009 INFORMS Design Science Award.
A computer system that can answer natural language questions over a broad range of knowledge with high accuracy and confidence has been envisioned by scientists and writers since the advent of computers themselves. Consider, for example, the "Computer" in Star Trek. The DeepQA project at IBM aims to take on this grand challenge by illustrating how the wide and growing accessibility of natural language content and the integration and advancement of Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation and Reasoning, and massively parallel computation can drive open-domain automatic Question Answering technology to a point where it clearly and consistently rivals the best human performance. In this talk, we will give an overview of the DeepQA technology and describe how it was used to build Watson, the computer system that won the Jeopardy Man vs. Machine challenge in February 2011. Watson's ability to process and analyze vast amounts of unstructured data has the potential to transform business intelligence, healthcare, customer support, enterprise knowledge management, social computing, science and government.
James Fan is a research staff member at IBM Research. His research interests include question answering, knowledge representation and reasoning, natural language processing, and machine learning. James is currently working on the DeepQA project, which is advancing the state of the art in automatic, open-domain question answering technology. The DeepQA team is pushing question answering technology to levels of performance previously unseen and demonstrating the technology by playing Jeopardy! at the level of a human champion.