> Simple: logic circuits for propositional logic are not Turing-complete, and in a lecture about theoretical computer science one wants to teach such more sophisticated, mathematical models of computation.
Logical circuits with delay (sequential [1] rather than combinational logic) are indeed not Turing complete -- but in the same uninteresting sense that a CPU is not Turing complete: in a conceptually irrelevant way. As I argued, Turing machines are precisely not "much more sophisticated" than CPUs or the circuits they are made of. Turing machines merely have actually infinite rather than potentially infinite memory.
You could modify the concept of a Turing machine such that its tape is potentially rather than actually infinite, and it would change nothing of substance. Yet this machine would suddenly only be considered equivalent to a finite-state machine.
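To make the point concrete, here is a minimal sketch (in Python; the one-rule machine at the bottom is a made-up example, not anyone's reference implementation) of a Turing-machine simulator whose tape is allocated lazily. Cells come into existence only when the head visits them, so the tape is only ever potentially infinite, yet the transition logic is exactly what the textbook definition prescribes:

```python
from collections import defaultdict

def run_tm(delta, start, accept, input_symbols, blank="_", max_steps=10_000):
    """Simulate a Turing machine on a lazily allocated tape.

    delta maps (state, symbol) -> (new_state, written_symbol, move),
    where move is +1 (right) or -1 (left). The tape is a dict defaulting
    to the blank symbol, so only visited cells ever exist: the tape is
    potentially infinite, not an actually infinite array.
    """
    tape = defaultdict(lambda: blank, enumerate(input_symbols))
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        if (state, tape[head]) not in delta:
            return False  # no applicable rule: reject
        state, tape[head], move = delta[(state, tape[head])]
        head += move
    raise RuntimeError("step limit reached")

# Hypothetical example machine: walk right over 1s, accept at the first blank.
delta = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_tm(delta, start="scan", accept="done", input_symbols="111"))  # True
```

Nothing in the control logic knows or cares whether the tape "really" extends to infinity; only the availability of a fresh cell whenever one is needed matters.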
Apart from that, a perhaps theoretically interesting point about combinational logic (circuits without time/delay) and finite-state machines would be that combinational logic is apparently computationally weaker than finite-state machines. Wikipedia doesn't say anything about that explicitly, but the "classes of automata" image [2] included in the article does suggest it. But that, assuming it's true, should be taught in a theoretical computer science class, not in computer engineering (if the latter mentions it at all).
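The intuition behind that weakness, sketched below under the usual idealization (this is my illustration, not something taken from the article), is that a combinational circuit is a pure boolean function of a fixed number of inputs, while a finite-state machine carries state from one input to the next and can therefore process input streams of arbitrary length -- parity being the standard example:

```python
# Combinational logic: a pure boolean function of a fixed number of inputs.
def parity3(a: bool, b: bool, c: bool) -> bool:
    """A fixed 3-input XOR circuit; it simply has no place to feed a 4th bit."""
    return a ^ b ^ c

# Sequential logic / finite-state machine: one state bit carried across
# an input stream of arbitrary length.
def parity_fsm(bits) -> bool:
    state = False          # the single flip-flop
    for b in bits:         # clock in one input per step
        state = state ^ b  # combinational next-state logic
    return state

print(parity3(True, False, True))             # False
print(parity_fsm([True, False, True, True]))  # True -- works for any length
```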
But, much more importantly, they -- the theoretical computer science professors, not the engineering guys -- should explain the theoretical relation between circuits with delay (sequential logic) and various automata, including Turing machines.
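For the finite-state end of the hierarchy that relation is quite direct: a synchronous sequential circuit is a state register plus combinational next-state and output logic, which is exactly a Mealy machine. A sketch of that correspondence (the rising-edge detector is a made-up example circuit, chosen only to show the structure):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class SequentialCircuit:
    """A synchronous sequential circuit viewed as a Mealy machine:
    a state register plus two combinational functions."""
    next_state: Callable[[int, int], int]  # combinational next-state logic
    output: Callable[[int, int], int]      # combinational output logic
    state: int = 0                         # contents of the state register

    def clock(self, inputs: Iterable[int]) -> list[int]:
        outs = []
        for x in inputs:
            outs.append(self.output(self.state, x))
            self.state = self.next_state(self.state, x)  # register update on the clock edge
        return outs

# Hypothetical example: a rising-edge detector (outputs 1 when the input goes 0 -> 1).
edge = SequentialCircuit(
    next_state=lambda s, x: x,                   # remember the previous input bit
    output=lambda s, x: int(x == 1 and s == 0),  # fire on a 0 -> 1 transition
)
print(edge.clock([0, 1, 1, 0, 1]))  # [0, 1, 0, 0, 1]
```

Bolt an unbounded (or merely growable) memory onto such a machine and you are back at something Turing-equivalent, which is precisely why the circuit/automaton relation deserves explicit treatment rather than silence.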
The likely fact is that they ignore circuits because circuits don't fit well into their supposedly neat picture of abstract machines. In fact, they cite their reliance on infinity as an important difference between the models of computation in question.
[1] https://en.wikipedia.org/wiki/Sequential_logic
[2] https://en.wikipedia.org/wiki/File:Automata_theory.svg