QFAM – Modelling – Conference in Vienna

Dear members of the QFAM,

We would like to point out that the conference “Modelling”, which will take place in Vienna next year and which we are organising, has extended its submission deadline to 13.10.2019.

This is partly due to the fact that too few submissions have been received from our own ranks. It is probably not necessary to remind you that the conference is not only a suitable venue for our young scientists to publish papers in the field of modelling, but also serves to build and deepen the community and to shape science policy opinion, especially for us and our young scientists.

It would therefore be helpful if you could consider helping to keep the community alive at the conference by submitting papers at short notice.

Best regards,
Bernhard Rumpe

New data analysis project RIPET

The DFKI has received a data analysis contract from the engineering company tec4you (the sister company of the DFKI-SiFa project partner tec4U – Lösungen verbinden: Beratung, Software und Datenservice aus einer Hand). Special thanks go to Nijat Mehdiyev for his commitment.

Forty-two countries adopt new OECD Principles on Artificial Intelligence

OECD and partner countries formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI) today, agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy. The OECD’s 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, signed up to the OECD Principles on Artificial Intelligence at the Organisation’s annual Ministerial Council Meeting, taking place today and tomorrow in Paris and focused this year on “Harnessing the Digital Transition for Sustainable Development”.

Elaborated with guidance from an expert group formed by more than 50 members from governments, academia, business, civil society, international bodies, the tech community and trade unions, the Principles comprise five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation. They aim to guide governments, organisations and individuals in designing and running AI systems in a way that puts people’s best interests first and ensures that AI system designers and operators are held accountable for their proper functioning.

“Artificial Intelligence is revolutionising the way we live and work, and offering extraordinary benefits for our societies and economies. Yet, it raises new challenges and is also fuelling anxieties and ethical concerns. This puts the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount,” said OECD Secretary-General Angel Gurría. “These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.” (Read the full speech.)

The AI Principles have the backing of the European Commission, whose high-level expert group has produced Ethics Guidelines for Trustworthy AI, and they will be part of the discussion at the forthcoming G20 Leaders’ Summit in Japan. The OECD’s digital policy experts will build on the Principles in the months ahead to produce practical guidance for implementing them.

While not legally binding, existing OECD Principles in other policy areas have proved highly influential in setting international standards and helping governments to design national legislation. For example, the OECD Privacy Guidelines, which set limits to the collection and use of personal data, underlie many privacy laws and frameworks in the United States, Europe and Asia. The G20-endorsed OECD Principles of Corporate Governance have become an international benchmark for policy makers, investors, companies and other stakeholders working on institutional and regulatory frameworks for corporate governance.

The AI Principles can be downloaded in full at this link. In summary, they state that: 
1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. 

The OECD recommends that governments: 
· Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
· Foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge.
· Create a policy environment that will open the way to deployment of trustworthy AI systems.
· Equip people with the skills for AI and support workers to ensure a fair transition.
· Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.

For further details, journalists are invited to contact Catherine Bremer in the OECD Media Office (+33 1 45 24 80 97). More on the OECD’s work on Artificial Intelligence: www.oecd.org/going-digital/ai/

Working with over 100 countries, the OECD is a global policy forum that promotes policies to improve the economic and social well-being of people around the world.

Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award

Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence


New York, NY, March 27, 2019 – ACM, the Association for Computing Machinery, today named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. Bengio is Professor at the University of Montreal and Scientific Director at Mila, Quebec’s Artificial Intelligence Institute; Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute, and University Professor Emeritus at the University of Toronto; and LeCun is Professor at New York University and VP and Chief AI Scientist, Facebook.

Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.

While the use of artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s, by the early 2000s, LeCun, Hinton and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community’s interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.

The ACM A.M. Turing Award, often referred to as the “Nobel Prize of Computing,” carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing. Bengio, Hinton and LeCun will formally receive the 2018 ACM A.M. Turing Award at ACM’s annual awards banquet on Saturday, June 15, 2019 in San Francisco, California.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM President Cherri M. Pancake. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science.”

“Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding,” said Jeff Dean, Google Senior Fellow and SVP, Google AI. “At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year’s Turing Award winners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor.”

Machine Learning, Neural Networks and Deep Learning
In traditional computing, a computer program directs the computer with explicit step-by-step instructions. In deep learning, a subfield of AI research, the computer is not explicitly told how to solve a particular task such as object classification. Instead, it uses a learning algorithm to extract patterns in the data that relate the input data, such as the pixels of an image, to the desired output such as the label “cat.” The challenge for researchers has been to develop effective learning algorithms that can modify the weights on the connections in an artificial neural network so that these weights capture the relevant patterns in the data.

Geoffrey Hinton, who has been advocating for a machine learning approach to artificial intelligence since the early 1980s, looked to how the human brain functions to suggest ways in which machine learning systems might be developed. Inspired by the brain, he and others proposed “artificial neural networks” as a cornerstone of their machine learning investigations.

In computer science, the term “neural networks” refers to systems composed of layers of relatively simple computing elements called “neurons” that are simulated in a computer. These “neurons,” which only loosely resemble the neurons in the human brain, influence one another via weighted connections. By changing the weights on the connections, it is possible to change the computation performed by the neural network. Hinton, LeCun and Bengio recognized the importance of building deep networks using many layers—hence the term “deep learning.”
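
As a concrete illustration of this description (a generic toy example, not drawn from the laureates' work), the following Python/NumPy sketch builds such a network with two layers of weighted connections; the entries of the weight matrices are the connection weights, and changing them changes the computation the network performs.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny network: 3 inputs -> 4 hidden "neurons" -> 2 outputs.
    W1 = rng.normal(size=(3, 4))        # weights of the input-to-hidden connections
    W2 = rng.normal(size=(4, 2))        # weights of the hidden-to-output connections

    def forward(x):
        """Compute the network's output for an input vector x."""
        hidden = np.tanh(x @ W1)        # each hidden neuron: weighted sum of inputs + nonlinearity
        return hidden @ W2              # each output: weighted sum of the hidden activations

    x = np.array([0.5, -1.0, 2.0])
    print(forward(x))                   # changing any entry of W1 or W2 changes this result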

The conceptual foundations and engineering advances laid by LeCun, Bengio and Hinton over a 30-year period were significantly advanced by the prevalence of powerful graphics processing unit (GPU) computers, as well as access to massive datasets. In recent years, these and other factors led to leap-frog advances in technologies such as computer vision, speech recognition and machine translation.

Hinton, LeCun and Bengio have worked together and independently. For example, LeCun performed postdoctoral work under Hinton’s supervision, and LeCun and Bengio worked together at Bell Labs beginning in the early 1990s. Even while not working together, there is a synergy and interconnectedness in their work, and they have greatly influenced each other.

Bengio, Hinton and LeCun continue to explore the intersection of machine learning with neuroscience and cognitive science, most notably through their joint participation in the Learning in Machines and Brains program, an initiative of CIFAR, formerly known as the Canadian Institute for Advanced Research.

Select Technical Accomplishments
The technical achievements of this year’s Turing Laureates, which have led to significant breakthroughs in AI technologies, include, but are not limited to, the following:

Geoffrey Hinton
Backpropagation: In a 1986 paper, “Learning Internal Representations by Error Propagation,” co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
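
For readers who want to see the algorithm in action, here is a minimal, textbook-style NumPy sketch of backpropagation training a one-hidden-layer network on the XOR problem; the gradients are written out by hand via the chain rule, and the example is illustrative rather than a reproduction of the 1986 formulation.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)                 # input -> hidden weights
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)                 # hidden -> output weights
    lr = 0.5                                                      # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        # Forward pass: compute the network's current prediction.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error from the output back toward the input layer.
        d_out = (out - y) * out * (1 - out)       # error signal at the output units (squared-error loss)
        d_h = (d_out @ W2.T) * h * (1 - h)        # error signal at the hidden units, via the chain rule
        # Update every weight a little in the direction that reduces the error.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))   # after training, typically close to the XOR targets [[0], [1], [1], [0]]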

Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.

Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
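
For readers unfamiliar with these two ingredients, the short NumPy sketch below illustrates what a rectified linear unit and (inverted) dropout do to a layer's activations; it is a generic illustration, not the exact configuration of the 2012 ImageNet model.

    import numpy as np

    def relu(z):
        """Rectified linear unit: keep positive activations, zero out the rest."""
        return np.maximum(0.0, z)

    def dropout(a, p=0.5, training=True, rng=None):
        """Inverted dropout: during training, randomly zero a fraction p of the
        activations and rescale the survivors so their expected value is unchanged."""
        if not training:
            return a                          # at test time the activations pass through untouched
        if rng is None:
            rng = np.random.default_rng(0)
        mask = rng.random(a.shape) >= p       # keep each unit with probability 1 - p
        return a * mask / (1.0 - p)

    z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
    print(relu(z))                            # negative values become 0, positive ones pass through
    print(dropout(relu(z)))                   # roughly half the units zeroed, the rest scaled up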

Yoshua Bengio
Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and modern deep learning speech recognition systems are extending these concepts.

High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper, “A Neural Probabilistic Language Model,” that introduced high-dimensional word embeddings as a representation of word meaning. Bengio’s insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism, which led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
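
To make the idea of word embeddings concrete, the toy NumPy sketch below uses made-up vectors (real models learn them from text) to show how words become points in a vector space and how geometric closeness stands in for similarity of meaning.

    import numpy as np

    # Toy embedding table: each word is a point in a (here 4-dimensional) vector space.
    # Real models learn these vectors from data; these numbers are invented for illustration.
    embeddings = {
        "king":  np.array([0.9, 0.8, 0.1, 0.0]),
        "queen": np.array([0.9, 0.7, 0.2, 0.1]),
        "apple": np.array([0.0, 0.1, 0.9, 0.8]),
    }

    def cosine_similarity(u, v):
        """Similarity of two word vectors: close to 1.0 means similar, close to 0.0 unrelated."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related meanings
    print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated meanings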

Generative adversarial networks: Since 2010, Bengio’s papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.

Yann LeCun
Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
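
For readers who want to see what “convolutional” means in practice, here is a minimal NumPy sketch of a single 2-D convolution (stride 1, no padding) that slides one small filter over a toy grayscale image; real convolutional networks learn many such filters per layer.

    import numpy as np

    def conv2d(image, kernel):
        """Slide `kernel` over `image` (stride 1, no padding) and return the feature map."""
        kh, kw = kernel.shape
        ih, iw = image.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # Each output value is a weighted sum over one local image patch.
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]], dtype=float)
    edge_filter = np.array([[-1.0, 1.0]])      # responds to vertical dark-to-bright edges
    print(conv2d(image, edge_filter))          # strong response only where the edge is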

Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.

Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks—a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in all modern deep learning software, that learning systems can be built as complex networks of modules in which backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
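
A minimal sketch of this module idea, assuming nothing beyond NumPy: each module implements its own forward and backward rule, and a network is simply a chain of modules through which gradients are propagated; modern deep learning frameworks automate exactly this bookkeeping with automatic differentiation.

    import numpy as np

    class Linear:
        """A fully connected module with its own forward and backward rules."""
        def __init__(self, n_in, n_out, rng):
            self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        def forward(self, x):
            self.x = x                      # cache the input for the backward pass
            return x @ self.W
        def backward(self, grad_out, lr=0.1):
            grad_in = grad_out @ self.W.T   # gradient with respect to the module's input
            self.W -= lr * self.x.T @ grad_out
            return grad_in

    class ReLU:
        """A nonlinearity module; it has no parameters, only a gradient rule."""
        def forward(self, x):
            self.mask = x > 0
            return x * self.mask
        def backward(self, grad_out, lr=0.1):
            return grad_out * self.mask

    rng = np.random.default_rng(0)
    net = [Linear(3, 8, rng), ReLU(), Linear(8, 1, rng)]

    x, target = rng.normal(size=(5, 3)), rng.normal(size=(5, 1))
    for _ in range(200):
        out = x
        for module in net:                  # forward: chain the modules
            out = module.forward(out)
        grad = out - target                 # gradient of a squared-error loss
        for module in reversed(net):        # backward: propagate gradients module by module
            grad = module.backward(grad)
    print(float(np.mean((out - target) ** 2)))   # the error shrinks as the modules learn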

Biographical Background

Geoffrey Hinton
Geoffrey Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute and a University Professor Emeritus at the University of Toronto. Hinton received a Bachelor’s degree in experimental psychology from Cambridge University and a Doctoral degree in artificial intelligence from the University of Edinburgh. He was the founding Director of the Neural Computation and Adaptive Perception (later Learning in Machines and Brains) program at CIFAR.
Hinton’s honors include Companion of the Order of Canada (Canada’s highest honor), Fellow of the Royal Society (UK), foreign member of the National Academy of Engineering (US), the International Joint Conference on Artificial Intelligence (IJCAI) Award for Research Excellence, the NSERC Herzberg Gold medal, and the IEEE James Clerk Maxwell Gold medal. He was also selected by Wired magazine for “The Wired 100—2016’s Most Influential People” and by Bloomberg for the 50 people who changed the landscape of global business in 2017.

Yoshua Bengio
Yoshua Bengio is a Professor at the University of Montreal, and the Scientific Director of both Mila (Quebec’s Artificial Intelligence Institute) and IVADO (the Institute for Data Valorization). He is Co-director (with Yann LeCun) of CIFAR’s Learning in Machines and Brains program. Bengio received a Bachelor’s degree in electrical engineering, a Master’s degree in computer science and a Doctoral degree in computer science from McGill University.
Bengio’s honors include being named an Officer of the Order of Canada, Fellow of the Royal Society of Canada and the Marie-Victorin Prize. His work in founding and serving as Scientific Director of the Quebec Artificial Intelligence Institute (Mila) is also recognized as a major contribution to the field. Mila, an independent nonprofit organization, now counts 300 researchers and 35 faculty members among its ranks. It is the largest academic center for deep learning research in the world, and has helped put Montreal on the map as a vibrant AI ecosystem, with research labs from major companies as well as AI startups.

Yann LeCun
Yann LeCun is Silver Professor of the Courant Institute of Mathematical Sciences at New York University, and VP and Chief AI Scientist at Facebook. He received a Diplôme d’Ingénieur from the Ecole Superieure d’Ingénieur en Electrotechnique et Electronique (ESIEE), and a PhD in computer science from Université Pierre et Marie Curie.
His honors include being a member of the US National Academy of Engineering; Doctorates Honoris Causa, from IPN Mexico and École Polytechnique Fédérale de Lausanne (EPFL); the Pender Award, University of Pennsylvania; the Holst Medal, Technical University of Eindhoven & Philips Labs; the Nokia-Bell Labs Shannon Luminary Award; the IEEE PAMI Distinguished Researcher Award; and the IEEE Neural Network Pioneer Award. He was also selected by Wired magazine for “The Wired 100—2016’s Most Influential People” and its “25 Geniuses Who are Creating the Future of Business.” LeCun was the founding director of the NYU Center of Data Science, and is a Co-director (with Yoshua Bengio) of CIFAR’s Learning in Machines and Brains program. LeCun is also a co-founder and former Member of the Board of the Partnership on AI, a group of companies and nonprofits studying the societal consequences of AI.


Call for Papers – CM-MAPS 2019

The 1st International Workshop on Conceptual Modeling for Multi-Agent Problem Solving will take place from August 10 to 12, 2019, in conjunction with IJCAI 2019 in Macao, China.

You can see a list of the relevant topics of the workshop below:

* Conceptual artifacts to visualize and compose multi-agent problem
  solvers and their requirements
* Languages to specify and reason about high-level problem solving
* Cooperation frameworks among multiple agents to solve a problem
* Task allocation and quality and efficiency issues
* Workflow driven coordination to enact and deploy multi-agent problem
  solver
* Formalisms to bridge conceptual and formal models for decision and
  learning multi-agent AI systems
* Evolution and change management for multi-agent problem solving
* Agent Capability modeling
* Ontology modelling and specification for agent and problem solving

More information about the workshop can be found here.

IJCAI 2019 Workshop on Conceptual Modeling for Multi Agent Problem Solving

This workshop is one result of the Dagstuhl Seminar 18471 on Next-generation Domain-Specific Modeling: Principles and Methods https://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=18471

Call for Papers
http://vienna.omilab.org/repo/files/CM-MAPS/CM-MAPS2019-CfP-v4.pdf (PDF-Download)

One of the first aspects of artificial intelligence to be studied was problem solving by autonomous systems. It was quickly realized, however, that many problems cannot be solved by a single autonomous system; in fact, a single system may not even have a complete understanding of the problem being solved. Against this background, cooperative problem solving, distributed artificial intelligence, and multi-agent systems were established as a comprehensive research area. The key ideas remain the same: the original problem is decomposed into sub-problems, agents (humans or systems) are assigned to work on the sub-problems, the agents must cooperate to solve the overall problem, and finally a coordination framework must be established to ensure that this cooperation happens as intended.
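
As a purely illustrative sketch of this pattern (the Agent and Coordinator classes and the toy problem below are invented for the example and do not refer to any existing framework), a coordinator decomposes a problem into sub-problems, allocates them to agents, and combines the partial results:

    class Agent:
        """An agent that knows how to solve one kind of sub-problem."""
        def __init__(self, name, solver):
            self.name, self.solver = name, solver
        def solve(self, sub_problem):
            return self.solver(sub_problem)

    class Coordinator:
        """Decomposes the problem, allocates sub-problems to agents, and merges the results."""
        def __init__(self, agents):
            self.agents = agents
        def run(self, problem):
            sub_problems = self.decompose(problem)
            # Task allocation: here simply one sub-problem per agent; real systems negotiate or plan.
            results = [agent.solve(sp) for agent, sp in zip(self.agents, sub_problems)]
            return self.combine(results)
        def decompose(self, problem):
            half = len(problem) // 2
            return [problem[:half], problem[half:]]     # split the data in two
        def combine(self, results):
            return sum(results)                         # merge the partial answers

    # Toy problem: summing a list, split between two cooperating agents.
    agents = [Agent("a1", sum), Agent("a2", sum)]
    print(Coordinator(agents).run(list(range(10))))     # 45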

As AI is applied to increasingly complex applications, from self-driving vehicles to IoT-driven, AI-backed smart solutions, the core issues of the problem, its sub-problems, cooperation, and coordination surface and become very relevant. In complex implemented solutions, from IBM Watson-based systems to conversational agents, the problem being solved is made up of multiple sub-problems, and yet the characterization of these sub-problems and their interrelationships is not explicitly represented. Such a representation could take the form of languages that glue together the coordination among deployed sub-systems, or of higher-level artifacts that convey the overall cooperation needed to solve the problem.

Conceptual modeling is based on conceptual models whose constructs have a well-defined meaning and on ontologies that convey, through language or visual diagrams, the concepts, their attributes, and the interrelationships among them. The area of knowledge representation likewise uses constructs to represent and model knowledge for comprehension and processing. Knowledge graphs are basic knowledge representation techniques that do not capture the processing and enactment aspects of a problem being solved by a multi-agent system. Furthermore, AI-based systems driven by cooperative frameworks provide new capabilities, discover extended relationships among constructs, and solve problems in novel and creative ways. Thus, new computing paradigms are needed to address the key issues and challenges in the modeling and development of the next generation of multi-agent systems.

The aim of the workshop is to bring together conceptual modelers, requirements specifiers, multi-agent language specifiers, formal process modelers, and cooperative problem solvers to open up this area of research and to help the designers and solution providers of large AI systems visualize, comprehend, discuss, evolve, and enact such systems. The ideas developed at the workshop can help to deploy, manage, monitor, and control large AI systems and contribute to more efficient and qualitatively better problem-solving multi-agent AI systems. Such large AI systems can be orchestrated through the execution of tasks triggered by events and coordinated by a workflow management system.

The key topics of interest for the workshop are:

  • Conceptual artifacts to visualize and compose multi-agent problem
    solvers and their requirements
  • Languages to specify and reason about high-level problem solving
  • Cooperation frameworks among multiple agents to solve a problem
  • Task allocation and quality and efficiency issues
  • Workflow driven coordination to enact and deploy multi-agent problem
    solver
  • Formalisms to bridge conceptual and formal models for decision and
    learning multi-agent AI systems
  • Evolution and change management for multi-agent problem solving
  • Agent Capability modeling
  • Ontology modelling and specification for agent and problem solving

Submission

Submitted papers must be formatted according to the IJCAI guidelines (see https://www.ijcai.org/authors_kit). All contributions should be at most six (6) pages: five (5) pages maximum for content and one (1) page for references.

Submissions should be made electronically, as PDF documents only, via the paper submission site: https://easychair.org/my/conference.cgi?conf=cmmaps2019

Important Dates

  • Apr 12, 2019: Deadline for submission of contributions to the workshop
  • May 10, 2019: Paper acceptance/rejection notification
  • May 24, 2019: Deadline for camera-ready paper versions
  • Aug 10-12, 2019: IJCAI 2019 Workshops

Organization
Organizing Committee:

Program Committee (tentative):

  • Robert Andrei Buchmann, Babes-Bolyai University Cluj-Napoca, Romania
  • Yi CAI, South China Univ of Technology
  • Peter Fettke, Deutsches Forschungszentrum für Künstliche Intelligenz,
    Germany
  • Aurona Gerber, University of Pretoria, South Africa
  • Knut Hinkelmann, FHNW Northwestern Switzerland, Switzerland
  • Julio Cesar Leite, PUC de Rio de Janeiro, Rio de Janeiro, Brazil
  • Maurizio Lenzerini, Università di Roma, Italy
  • Qing Li, Poly U, HK
  • Wolfgang Maass, Saarland University, Germany
  • Heinrich C. Mayr, Alpen-Adria Universität Klagenfurt, Austria
  • John Mylopoulos, University of Toronto
  • Praveen Paruchuri, IIIT Hyderabad, India
  • David V. Pynadath, USC Institute for Creative Technologies, USA
  • Kurt Sandkuhl, University of Rostock, Germany
  • Hannes Schlieter, TU Dresden, Germany
  • Bernhard Thalheim, Christian-Albrechts-University Kiel, Germany
  • Isabelle Wattiau, ESSEC Business School, Paris, France
  • Manuel Wimmer, JKU Linz, Austria
  • Robert Woitsch, BOC Asset Management GmbH, Vienna, Austria
  • Takahira Yamaguchi, Keio University, Japan
  • Yan Liu Fiona, Poly U, HK
  • Xiao-Ming Wu, Poly U, HK
  • Shuai Li, Poly U, HK


Dr. Dominik Bork
University of Vienna
Faculty of Computer Science
Research Group Knowledge Engineering
Room: 4.19
Währinger Straße 29, 1090 Vienna
Phone: +43-1-4277-789 22
eFax: +43-1-4277-878922

Call for Papers – Special Issue in EMISAJ

Enterprise Modelling and Information Systems Architectures (EMISAJ) is a scholarly peer-reviewed open-access journal with a unique focus on novel and innovative conceptual modeling research and its applications. We are proud to announce that Prof. Dr. Peter Fettke is one of the editors of the special issue titled “Blockchain Technologies in Enterprise Modeling and Enterprise Information Systems”. For this special issue, the editors are looking for high-quality articles that examine the use of blockchains and the underlying technologies in enterprise modeling and enterprise information systems. More information about the journal itself can be found here, and more details about the special issue can be found here.

EMMSAD 2019 – CALL FOR PAPERS

The 24th International Conference on Exploring Modeling Methods of Systems Analysis and Development (EMMSAD’2019) will be held on June 3-4, 2019, co-located with CAiSE 2019 in Rome, Italy.

EMMSAD aims to provide a forum for researchers and practitioners interested in modeling methods for Systems Analysis and Development to meet and exchange research ideas and results.
A best paper award will be presented; all candidates for the best paper award will be invited to submit enhanced versions of their papers for consideration in a special issue of the Software and Systems Modeling (SoSyM) journal.

The Call For Papers can be downloaded here.

IDS 2019 – Industrial Data Science Conference

Dear research partners of the AKKORD consortium,

We would like to bring you up to date on the current status of the project application. After yesterday’s consultation with the project management agency, we were informed that all the necessary documents had been submitted, the sub-projects had been booked, and that we would receive the grant notices at the end of January. Thus, nothing stands in the way of a project start at the beginning of April. As a next step, we would like to make the kick-off planning more concrete; we will contact you about this as soon as possible.

In addition to the project-related content, we would like to draw your attention to the event “IDS 2019 – Industrial Data Science Conference” on 13 March in Dortmund. This event is organized by our institute together with the consortium partner RapidMiner and focuses on industrial data analysis. Specifically, the focus will be on industrial applications of data science, success factors for data science projects, current research activities, data-based quality and process optimization, and the strategic integration of data science in companies, from both a scientific and a practical perspective. The topics are therefore directly relevant to the AKKORD project and will certainly provide very good input for our joint research project. Against this background, we would like to invite you to participate in the IDS Conference 2019.

Please find attached a flyer with further information on registration. The overview of speakers and lectures will be continuously updated at www.ids2019.industrial-data-science.de. Alongside numerous other experts from industry and research, Mrs. Ringeln (AKKORD project manager at Miele) will present an exciting use case from the field of industrial data analysis.

We would be very pleased to welcome you on site in Dortmund. If you have any questions regarding the IDS Conference or the further procedure of the AKKORD project, please do not hesitate to contact us.

Yours
AKKORD-IPS Team