Talk:Backtracking

From Wikipedia, the free encyclopedia


Plagiarism?

[edit]

I don't know who copied from whom; maybe both copied from a third party. But this article is almost identical to this externally hosted page: http://computer-engineering.science-tips.org/algorithms/fundamentals/backtracking.html Marcus 134.147.19.211

It looks to me like that version appeared only after the similarly worded Wikipedia version. I'd bet dollars to donuts they copied the Wikipedia article. --Cheeser1 (talk) 05:20, 12 January 2008 (UTC)[reply]

Muddle Rationale

[edit]

The rationale expressed in this article is often muddled, internally inconsistent, and possibly inconsistent with related material.

Consider, for example, the original text before my paltry cleanup:

-- When backtracking is used in a constraint programming language, it has an added complication that the information stored about the constraints needs to be updated as well. In these languages, a simple depth first search is an adequate implementation technique, as it is in Planner and Prolog. In implementing backtracking, it is common to keep a variable trail, to record the changes in the values of a particular variable. A smart implementor will avoid creating a variable trail between two successive changes when there is no choice point, as the backtracking will erase all of the changes as a single operation. --

Re: the phrase 'has an added complication that the information stored' — it gives no clue what 'the information' is, who stores it, or why it is being stored.

Similarly, re: 'variable trail to record the changes' — it is totally unclear how the 'variable trail' relates to the state history that MUST be maintained in order to recover state when backing up.

I've cleaned up what I could, but this article needs work. Sorry if this sounds harsh; I'm just trying to be specific about the confusion here. LarryLACa 11:49, 12 December 2005 (UTC)[reply]
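For readers following this thread, here is a minimal, hypothetical sketch (in Python, with made-up names) of the "variable trail" idea the quoted passage gestures at: old values are trailed precisely so that backtracking can restore the state to the last choice point in one sweep.

```python
# Hypothetical sketch of a variable trail for a backtracking solver.
# Assumptions: variables live in a plain dict, and None marks
# "previously unbound" (good enough for an illustration).

class Trail:
    def __init__(self):
        self.entries = []   # (env, name, old_value) records
        self.marks = []     # trail lengths at each choice point

    def mark(self):
        """Record a choice point."""
        self.marks.append(len(self.entries))

    def assign(self, env, name, value):
        """Set env[name], remembering the old binding for undo."""
        self.entries.append((env, name, env.get(name)))
        env[name] = value

    def undo_to_mark(self):
        """Backtrack: restore all bindings made since the last choice point."""
        mark = self.marks.pop()
        while len(self.entries) > mark:
            env, name, old = self.entries.pop()
            if old is None:
                del env[name]       # was unbound before this choice point
            else:
                env[name] = old

env = {}
t = Trail()
t.assign(env, "x", 1)   # before any choice point
t.mark()                # choice point
t.assign(env, "x", 2)
t.assign(env, "y", 3)
t.undo_to_mark()        # erases both changes as a single operation
```

A "smart implementor", in the quoted text's sense, would additionally skip the `entries.append` when no choice point lies between two successive changes to the same variable, since the older record already suffices to undo both.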

One solution or all of them?

[edit]

This article is not clear about the goal of the search: implicitly it seems to assume that finding one solution to the search problem (or proving that none exists) suffices, without proving uniqueness even where that matters. This can be seen, for instance, in the pseudo-code. However, there are many cases where one wants to count the number of solutions, and that is when backtracking will be used inevitably, if only to show that a solution found is unique. Think of the n queens problem. I believe the term backtracking is also used for such problems, and I think the wording should be adapted to allow for this case. Marc van Leeuwen (talk) 09:31, 27 May 2008 (UTC)[reply]
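To make the "count all solutions" use concrete, here is a small Python sketch (my own illustration, not taken from the article) that counts every solution to the n-queens problem by backtracking, rather than stopping at the first one:

```python
# Backtracking counter for n-queens: instead of returning on the first
# complete placement, it counts it and keeps exploring.

def count_n_queens(n):
    """Count all placements of n non-attacking queens on an n x n board."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1          # complete solution: count it and backtrack
            return
        for col in range(n):
            if col in cols or row + col in diag1 or row - col in diag2:
                continue        # reject: this square is attacked
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)      # extend the partial candidate
            cols.discard(col); diag1.discard(row + col); diag2.discard(row - col)

    place(0)
    return count

print(count_n_queens(8))  # the classic 8x8 board has 92 solutions
```

The only change from a find-one version is that reaching a complete candidate increments a counter instead of returning a result, which is exactly the adaptation the comment above asks the article's pseudocode to accommodate.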

dependency-directed backtracking

[edit]

Is dependency-directed backtracking (see Stallman, Richard M. (1977). "Forward Reasoning and Dependency-Directed Backtracking in a System for Computer-Aided Circuit Analysis". Artificial Intelligence 9, pp. 135–196.) worth mentioning in this article and redirecting here? -- Beland (talk) 17:49, 17 August 2008 (UTC)[reply]

Someone is also asking if this is the same as backjumping on that article's talk page. -- Beland (talk) 17:51, 17 August 2008 (UTC)[reply]


possible new technique : turbobacktracking

[edit]

An algorithm that could "capture" the "profile" of part of a backtracking-type solution might count more than one solution at a time... — Preceding unsigned comment added by 93.118.212.31 (talk) 17:59, 24 July 2011 (UTC)

To fight processing stalls, any backtracking (or turbobacktracking) might use smart tests both on the forward step and on the backtrack step. 93.118.212.93 (talk) 08:24, 11 November 2012 (UTC)[reply]

Knuth and Cormen References

[edit]

Although both the Knuth and Cormen references are classic books in the field of algorithms, neither deals with backtracking. On Knuth's page about The Art of Computer Programming (http://www-cs-faculty.stanford.edu/~uno/taocp.html) backtracking is slated for Volume 4, which has not yet been released. In Cormen et al. there is no index entry for backtracking. — Preceding unsigned comment added by Sness (talkcontribs) 04:25, 7 April 2014 (UTC)[reply]

Yes, I have to agree: the citation (Knuth, TAoCP 1968) in the definition appears to be an entirely fictitious reference. Knuth had only published Volume 1 by 1968, and I cannot find the word "backtrack" anywhere in it. As far as I can tell, it did not appear anywhere in TAoCP at the time this citation was added to the article. In fact, Knuth only supplies a description in Volume 4, Fascicle 5B, published in 2019, and the definition here clearly is not based on that content. Therefore I am removing the citation. -- RBarryYoung (talk) 15:42, 27 December 2020 (UTC)[reply]

"most efficient"?

[edit]

The phrase "(if not the most efficient[citation needed])" has been there since 2011. Isn't it time it was removed? — Preceding unsigned comment added by 65.95.81.13 (talk) 01:25, 13 April 2014 (UTC)[reply]

confusing and unnecessary distinction

[edit]

I suggest a modification of your algorithm, for four reasons:

1. You make an unnecessary distinction between the first child and the others: the parent creates the first child, while an elder sibling produces the next. The knowledge in reject(), accept() and first() is thus inherited from different creators, which is confusing.

2. Your node interface makes it possible to rewind the iteration over the children. Backtracking definitely does not want that.

3. Your code could be shorter, and your node interface simpler.

4. The domain of your backtracking search is restricted to a tree, but it could be a directed graph.

Please, check this out:

 procedure bt(c) 
   if reject(P,c) then return 
   if accept(P,c) then output(P,c) 
   while s ← next(P,c), s ≠ Λ do 
     bt(s)

Csaba Kelemen (talk) 13:32, 28 June 2015 (UTC)[reply]
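As a sanity check on the proposal above, here is a hedged Python sketch (toy problem, illustrative names) in which a single child generator per node plays the role of the combined `next(P,c)` call, so there is no separate `first()` and no way to rewind the iteration:

```python
# Python rendering of the single-successor-function idea above.
# Toy problem: enumerate all bit strings of length N. Generator
# exhaustion plays the role of the Λ sentinel.

N = 3
solutions = []

def reject(c):
    return len(c) > N                 # prune candidates that are too long

def accept(c):
    return len(c) == N                # complete candidate

def output(c):
    solutions.append(tuple(c))

def children(c):
    """Yield successive extensions of c, one at a time."""
    for bit in (0, 1):
        yield c + [bit]

def bt(c):
    if reject(c):
        return
    if accept(c):
        output(c)
    for s in children(c):             # 'while s ← next(P,c), s ≠ Λ' as a loop
        bt(s)

bt([])
print(len(solutions))                 # 2**3 = 8 bit strings
```

Because each node owns its own generator, the iteration over children can only move forward, which addresses point 2 of the proposal.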

algorithm

[edit]

Your algorithm is very clear, but I think constraints should appear in it. In many cases you need to adapt your data at each level; this is where backtracking happens. So 'changeData'/'restoreData' calls need to be added, to keep track and to backtrack.

procedure bt(c)

 if reject(P,c) then return
 if accept(P,c) then output(P,c)
 // change some data struct for next level
 changeData(P,c)
 s ← first(P,c)
 while s ≠ Λ do
   bt(s)
   s ← next(P,s)
 // restore data struct for sibling
 // restore data struct for sibling
 restoreData(P,c)

— Preceding unsigned comment added by Moan1223 (talkcontribs) 13:35, 30 August 2016 (UTC)[reply]
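A runnable illustration of the changeData/restoreData idea above (my own sketch, with a toy problem): shared state is mutated on the way down and restored on the way back up, so each sibling sees clean data.

```python
# do/undo backtracking over shared state.
# Toy problem: enumerate all permutations of [0, 1, 2].

items = [0, 1, 2]
used = [False] * len(items)           # shared data structure
path = []                             # current partial candidate
found = []

def bt():
    if len(path) == len(items):
        found.append(tuple(path))     # accept a complete permutation
        return
    for i, x in enumerate(items):
        if used[i]:
            continue                  # reject this extension
        used[i] = True                # changeData: mutate for the next level
        path.append(x)
        bt()
        path.pop()                    # restoreData: undo for the sibling
        used[i] = False

bt()
print(len(found))                     # 3! = 6 permutations
```

The pair of mutations before the recursive call and the matching pair after it are exactly the changeData/restoreData bracket the comment proposes.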

Please give the categories of problems to which backtracking applies. — Preceding unsigned comment added by 61.3.4.209 (talk) 15:07, 27 November 2016 (UTC)[reply]

and or null

[edit]

The section on "Constraint satisfaction" uses the symbol "Λ" to mean both

   end of list

and

   logical "and".

Anybody feel strongly enough about this section to clean it up? Thanks. DavidHolmes0 (talk) 18:33, 23 March 2018 (UTC)[reply]

How is this a metaheuristic?

[edit]

The intro states: "Backtracking depends on user-given "black box procedures" that define the problem to be solved, the nature of the partial candidates, and how they are extended into complete candidates. It is therefore a metaheuristic rather than a specific algorithm – although, unlike many other meta-heuristics, it is guaranteed to find all solutions to a finite problem in a bounded amount of time."

This seems specious to me. It seems like meta-heuristics guide local search algorithms to find local minima, but backtracking simply describes a procedure for skipping the exploration of infeasible parts of the state space. The paragraph does not cite a source. I think the paragraph is unnecessary and should be deleted. Certainly, the second sentence seems incorrect and should be deleted.

I would do it myself, but that paragraph has been in the article since 2008, so I'm posting here first to make sure I'm not missing something. If someone else sees this and agrees with me, please delete it.

Not sure if the original author from 13(!) years ago still has any opinion on this, but if so, cc: User:Jorge_Stolfi --Macoroni (talk) 15:18, 27 July 2021 (UTC)[reply]

  • @Macoroni: Indeed there should be a reference supporting that explanation.
    However, I understand that "meta-heuristic" is just what that sentence says: a method for combining black-box heuristics in the hope of obtaining a better heuristic. And a "heuristic" is any procedure that is not guaranteed to solve the problem, but is either somewhat likely to find a solution, or will return something that is likely to be good enough for practical purposes.
    Meta-heuristics are most often used for optimization problems, but that is because those are the most common types of practical problems that have no algorithmic exact solution, and hence are usually attacked with heuristics -- in particular, with several heuristics combined with a meta-heuristic.
    However, heuristics -- and therefore meta-heuristics -- are also used to solve problems that are not optimization problems, e.g. finding a Hamiltonian path in a graph, checking whether two graphs are isomorphic, finding a formula for the integral of a given formula, etc..
    And, on the other hand, backtracking is often used to find the maximum or minimum of a function. It deserves to be called a "heuristic", rather than an algorithm, because (1) the search is often stopped well before it has explored the whole state space, and is not guaranteed to find a solution before that; and (2) the loop that checks all possible extensions of a partial solution usually sorts them by some criterion that is not guaranteed to put the best choice first, but only the most promising ones, in some vague sense. Which is what most (meta)-heuristics do.
    Also, meta-heuristics are not intended to find local minima: on the contrary, a meta-heuristic typically is designed to generate more or less "random" candidates, that are expected to be worse than the best solution found so far, in order to overcome any tendency that the given heuristics may have to get trapped in local minima.
    Finally, in theory there is no distinction between "optimization" problems and other problems. For example, the problem of finding the factors of a large integer N can be viewed as that of maximizing the function f(x,y) that is 1 if x·y = N and 0 otherwise. Conversely, the problem of finding the cheapest traveling salesman tour on n cities is equivalent (up to a factor of about log(n)) to that of determining whether there exists a tour with cost K or less.
    Thus I, for one, do not think that the statement is incorrect or misleading.
    --Jorge Stolfi (talk) 20:16, 27 July 2021 (UTC)[reply]
    • Thanks for responding to my assault on your contribution from 13 years ago! :P I appreciate the thorough response. I'm more convinced now that you could technically call backtracking a metaheuristic, but I still feel like backtracking is the least metaheuristicy of the other search algorithms: for example, A*, best-first search, even branch-and-bound all describe techniques that use heuristics to decide to explore some part of the search-space over some other part, yet none of those articles contain this disclaimer in their intro. So even if technically true, it probably benefits the article to remove or at least move that paragraph. For clarity, is the heuristic you're referring to in backtracking the "reject" procedure? If so, I'd argue that this wouldn't commonly be referred to as a heuristic, since it's not really giving an approximate answer, it's giving an exact answer to the question "is this partial solution feasible?" Macoroni (talk) 18:15, 28 July 2021 (UTC)[reply]
      • Hi! I don't feel strongly about this issue, but let me note that the Reject procedure is usually a heuristic, because it needs to be only "half-correct". It must return true ("reject") only when there is no valid extension, sure; but it is allowed to return false ("go on") even when there is no valid extension. The efficiency of the search will depend on how good the Reject procedure is at predicting certain failure.
        For example, when searching for a Hamiltonian path in a graph G, the Reject procedure could just check whether the end of the current partial path P has any unvisited neighbor in G. A smarter version of Reject could check also whether the graph G-P is still connected.
        The First and Next procedures too are heuristics, because the order of the extensions is arbitrary but may greatly affect the cost of finding a solution, or even of enumerating all solutions. They too can be smarter by excluding from the list of extensions any choices that, by their analysis, cannot lead to valid solutions. In the above example, the First and Next heuristics could skip any cut vertices of the graph G-P.
        Moreover, the First and Next procedures could vary in the nature of the extensions. Still in the above example, the extensions could be short partial paths in G-P to be concatenated with P, rather than just single vertices.
        All the best, --Jorge Stolfi (talk) 12:24, 7 August 2021 (UTC)[reply]
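The two Reject heuristics described in this thread can be sketched concretely (my own Python illustration; the graph representation and names are assumptions): the cheap check asks whether the path's end still has an unvisited neighbour, and the stronger one additionally asks whether the unvisited vertices are still connected.

```python
# Two Reject heuristics for Hamiltonian-path backtracking, as discussed
# above. Graph: adjacency dict {vertex: [neighbours]}. path: list of
# visited vertices in order (non-empty for the cheap check to be useful).

from collections import deque

def cheap_reject(graph, path):
    """Reject if the end of the partial path has no unvisited neighbour."""
    if not path:
        return False
    visited = set(path)
    if len(visited) == len(graph):
        return False                   # complete path: nothing to reject
    return all(v in visited for v in graph[path[-1]])

def smart_reject(graph, path):
    """Also reject if the unvisited vertices split into several components,
    since no single continuation can cover them all (a stronger necessary
    condition, still not a complete test)."""
    if cheap_reject(graph, path):
        return True
    visited = set(path)
    rest = [v for v in graph if v not in visited]
    if not rest:
        return False
    seen = {rest[0]}                   # BFS over G - P
    queue = deque([rest[0]])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in visited and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) < len(rest)

# Path graph 0-1-2-3: starting in the middle strands one side.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(cheap_reject(graph, [1]), smart_reject(graph, [1]))  # → False True
```

As the thread notes, the efficiency of the whole search hinges on how early such half-correct tests can predict certain failure; both are allowed to say "go on" when failure is in fact certain, but never the reverse.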