Macrooperators Revisited in Inductive Logic Programming
Érick Alphonse
MIG - INRA/UR1077, 78352 Jouy en Josas CEDEX, FRANCE
ealphons@jouy.inra.fr
Abstract. For the last ten years, a lot of work has been devoted to propositionalization techniques in relational learning. These techniques change the representation of relational problems into attribute-value problems in order to use well-known learning algorithms to solve them. Propositionalization approaches have been successfully applied to various problems, but are still considered ad hoc techniques. In this paper, we study these techniques in the larger context of macro-operators as techniques to improve heuristic search. The macro-operator paradigm enables us to propose a unified view of propositionalization and to discuss its current limitations. We show that a whole new class of approaches can be developed in relational learning, which extends the idea of changes of representation to more suitable learning languages. As a first step, we propose different languages that provide a better compromise than current propositionalization techniques between the cost of building macro-operators and the learning cost. It is known that ILP problems can be reformulated either into attribute-value or multi-instance problems. With the macro-operator approach, we see that we can target a new representation language we name multi-table. This new language is more expressive than attribute-value but less expressive than multi-instance. Moreover, it is PAC-learnable under weak constraints. Finally, we suggest that relational learning can benefit from both the problem-solving and the attribute-value learning communities by focusing on the design of effective macro-operator approaches.
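To make the representation change concrete, the following is a minimal, hypothetical sketch (in Python, with invented predicates, feature names and data) of an existential propositionalization step: each relational example, given as a set of ground facts, is mapped through a fixed set of relational queries to a boolean attribute-value row that any standard learner can consume. It only illustrates the kind of reformulation discussed here, not the specific macro-operator construction developed in the paper.

    # Hypothetical sketch of propositionalization (all predicates, feature
    # names and data below are invented for illustration).

    # A relational example is a set of ground facts describing one object.
    example_1 = {("atom", "a1", "c"), ("atom", "a2", "o"), ("bond", "a1", "a2")}
    example_2 = {("atom", "a1", "c"), ("atom", "a2", "c"), ("bond", "a1", "a2")}

    # Each feature is a small relational query; here every query is
    # existential, which is what makes the target representation
    # attribute-value.
    features = {
        "has_carbon": lambda ex: any(f[0] == "atom" and f[2] == "c" for f in ex),
        "has_oxygen": lambda ex: any(f[0] == "atom" and f[2] == "o" for f in ex),
        "has_bond":   lambda ex: any(f[0] == "bond" for f in ex),
    }

    def propositionalize(example):
        # Map one relational example to a single attribute-value row.
        return {name: query(example) for name, query in features.items()}

    rows = [propositionalize(ex) for ex in (example_1, example_2)]
    print(rows)  # an ordinary boolean table, usable by any attribute-value learner

Roughly speaking, a multi-instance reformulation would instead keep a bag of such rows per example (one per matching of the query variables) rather than a single row; the multi-table language mentioned in the abstract is positioned between these two extremes.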
Introduction
Following [1], concept learning is defined as search: given a hypothesis space defined a priori, identified by its representation language, find a hypothesis consistent with the learning data. By relating concept learning to search in a state space, this paper enabled machine learning to integrate techniques from problem solving, operational research and combinatorics. The search is NP-complete for a large variety of languages of interest (e.g. [2–4]), and heuristic search is crucial for efficiency. While heuristic search has been shown to be effective in attribute-value languages, it appeared early on that learning in relational languages, known for more