
🆕 **since December 2020**: Playing with the actor model in an embedded multicore context. Imperative C components become pure C stream functions with no side effects ➡️ low-level C programming with the properties of high-level pure functional programming 🏆

📰 **Saturday 30 January 2021**: Playing with Pandoc Lua filters. panda is a lightweight alternative to abp.

🆕 **Sunday 24 May 2020**: I have been working at EasyMile for more than 4 years. Critical real-time software in C, simulation and monitoring in Haskell ➡️ a perfect combo! It’s efficient and fun ;-)

🚌 And **we are recruiting!** Contact me if you are interested in **Haskell** or **embedded software** (or both).


28 May 2020

Toy Parser Generator is a lexical and syntactic parser generator for Python. This generator was born from a simple observation: YACC is too complex to use in simple cases (calculators, configuration files, small programming languages, …).

With TPG you can very simply write parsers that are useful for most everyday needs (even if it can’t make your coffee). With a very clear and simple syntax, you can write an attributed grammar that is translated into a recursive descent parser. The code TPG generates stays very close to the original grammar, which means the parser works like the grammar. A grammar rule can be seen as a method of the parser class, symbols as method calls, attributes as method parameters, and semantic values as return values. You can also add Python code directly into grammar rules and build abstract syntax trees while parsing.

The first application of TPG is TPG itself. The first (unreleased) version of TPG was written by hand and then used to generate the next versions. Now TPG can generate itself.
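To illustrate the rule-to-method mapping described above, here is a minimal hand-written sketch (not TPG's generated code; the class and token list are hypothetical): the rule `Expr/e -> Term/e (add_op/op Term/t $ e=op(e,t))*` becomes a method whose return value is the semantic value `e`.

```python
# Hypothetical sketch of how a grammar rule maps to a parser method.
# Rule:  Expr/e -> Term/e ( add_op/op Term/t  $ e = op(e,t) )*
import operator

class SketchParser:
    def __init__(self, tokens):
        self.tokens = tokens      # pre-split token list
        self.pos = 0              # current position (TPG keeps this internally)

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def Expr(self):               # the rule becomes a method; its semantic
        e = self.Term()           # value is the return value
        while self.peek() in ('+', '-'):
            op = {'+': operator.add, '-': operator.sub}[self.tokens[self.pos]]
            self.pos += 1         # non-terminals are plain method calls
            e = op(e, self.Term())
        return e

    def Term(self):               # a terminal consumes one token
        tok = self.tokens[self.pos]
        self.pos += 1
        return int(tok)

print(SketchParser(['1', '+', '2', '-', '4']).Expr())  # → -1
```

Attributes (`/e`, `/op`, `/t`) become local variables, and the embedded `$ e=op(e,t)` action is ordinary Python executed at that point in the method.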

For an up-to-date documentation, please read tpg.pdf.

Please let me know if you use TPG in one of your projects. I will add you to the list of projects using TPG.

Python 2.2 or newer is required. TPG works with both Python 2 and 3.

The lexical scanner uses Python regular expressions. The text is split before being parsed by the grammar rules.
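A minimal sketch of this kind of regex-based scanning (not TPG's actual implementation; the token set here is a made-up example): the whole input is split into tokens with Python's `re` module before any grammar rule runs.

```python
# Sketch of a regex-based scanner: split the whole text into tokens first,
# as TPG does with Python regular expressions, then hand the list to the parser.
import re

token_re = re.compile(r'\s*(?:(\d+)|([+\-*/()]))')  # integers and operators

def scan(text):
    tokens, pos = [], 0
    text = text.rstrip()
    while pos < len(text):
        m = token_re.match(text, pos)
        if not m:
            raise SyntaxError("unexpected character at position %d" % pos)
        number, symbol = m.groups()
        tokens.append(('int', int(number)) if number else ('op', symbol))
        pos = m.end()
    return tokens

print(scan("1 + 2*3"))
# → [('int', 1), ('op', '+'), ('int', 2), ('op', '*'), ('int', 3)]
```

The `\s*` prefix plays the role of TPG's `separator space '\s+'` declaration: whitespace is skipped between tokens rather than returned.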

TPG isn’t based on predictive algorithms with tables like LL(k). The main idea is instead to try every possible choice and to accept the first one that matches the input. So when a choice point is reached, say `A|B|C`, the parser will first try to recognize `A`. If this fails it will try `B`, and if necessary `C`. So, contrary to LL(k) parsers, the order of the branches at choice points is very important for TPG. This method was in fact inspired by Prolog DCG parsers. But remember that once a choice has been made, even if there are other possible choices, it can’t be undone (in Prolog it can). The text to be parsed has to be stored in a string in memory (backtracking is simpler this way). During parsing, the current position is stored in internal TPG variables for all terminal and non-terminal symbols.

So we can say that TPG uses a sort of very limited backtracking.

This algorithm is easy to implement. Each rule is translated into a class method without having to compute a prediction table. The main drawback of this method is that you have to be careful when you write your grammar (as in Prolog).
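The ordered-choice behaviour described above can be sketched in a few lines (a toy illustration, not TPG's internals): each branch is tried in order, the position is rewound on failure, and the first branch that matches wins for good.

```python
# Sketch of TPG-style ordered choice: try branches in order, rewind the
# position on failure, commit to the first success (unlike Prolog, a
# committed choice is never revisited).

class Fail(Exception):
    pass

def choice(parser, *branches):
    for branch in branches:
        saved = parser.pos            # remember position (limited backtracking)
        try:
            return branch(parser)
        except Fail:
            parser.pos = saved        # rewind and try the next branch
    raise Fail

class P:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0
    def eat(self, tok):
        if self.pos < len(self.tokens) and self.tokens[self.pos] == tok:
            self.pos += 1
            return tok
        raise Fail

# Branch order matters: try '**' before '*', otherwise '*' would win first.
pow_or_mul = lambda p: choice(p, lambda p: p.eat('**'), lambda p: p.eat('*'))
print(pow_or_mul(P(['**'])))   # → '**'
print(pow_or_mul(P(['*'])))    # → '*'
```

This is why the grammar needs care: a branch that always matches a prefix of a later branch must come after it, or the later branch is unreachable.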

This page presents a well known example: a calculator.

More detailed examples are given in the documentation of TPG.

```
#!/usr/bin/env python

import math
import operator
import string
import tpg

if tpg.__python__ == 3:
    operator.div = operator.truediv
    raw_input = input

def make_op(op):
    return {
        '+'   : operator.add,
        '-'   : operator.sub,
        '*'   : operator.mul,
        '/'   : operator.div,
        '%'   : operator.mod,
        '^'   : lambda x, y: x**y,
        '**'  : lambda x, y: x**y,
        'cos' : math.cos,
        'sin' : math.sin,
        'tan' : math.tan,
        'acos': math.acos,
        'asin': math.asin,
        'atan': math.atan,
        'sqr' : lambda x: x*x,
        'sqrt': math.sqrt,
        'abs' : abs,
        'norm': lambda x, y: math.sqrt(x*x + y*y),
    }[op]

class Calc(tpg.Parser, dict):
    r"""
        separator space '\s+' ;

        token pow_op  '\^|\*\*'                                             $ make_op
        token add_op  '[+-]'                                                $ make_op
        token mul_op  '[*/%]'                                               $ make_op
        token funct1  '(cos|sin|tan|acos|asin|atan|sqr|sqrt|abs)\b'         $ make_op
        token funct2  '(norm)\b'                                            $ make_op
        token real    '(\d+\.\d*|\d*\.\d+)([eE][-+]?\d+)?|\d+[eE][-+]?\d+'  $ float
        token integer '\d+'                                                 $ int
        token VarId   '[a-zA-Z_]\w*' ;

        START/e ->
            'vars'                  $ e = self.mem()
          | VarId/v '=' Expr/e      $ self[v] = e
          | Expr/e
        ;

        Var/$self.get(v,0)$ -> VarId/v ;

        Expr/e -> Term/e ( add_op/op Term/t   $ e = op(e,t)
                         )*
        ;

        Term/t -> Fact/t ( mul_op/op Fact/f   $ t = op(t,f)
                         )*
        ;

        Fact/f ->
            add_op/op Fact/f        $ f = op(0,f)
          | Pow/f
        ;

        Pow/f -> Atom/f ( pow_op/op Fact/e    $ f = op(f,e)
                        )?
        ;

        Atom/a ->
            real/a
          | integer/a
          | Function/a
          | Var/a
          | '\(' Expr/a '\)'
        ;

        Function/y ->
            funct1/f '\(' Expr/x '\)'               $ y = f(x)
          | funct2/f '\(' Expr/x1 ',' Expr/x2 '\)'  $ y = f(x1,x2)
        ;
    """

    def mem(self):
        vars = sorted(self.items())
        memory = ["%s = %s" % (var, val) for (var, val) in vars]
        return "\n\t" + "\n\t".join(memory)

print("Calc (TPG example)")
calc = Calc()
while 1:
    l = raw_input("\n:")
    if l:
        try:
            print(calc(l))
        except Exception:
            print(tpg.exc())
    else:
        break
```

The documentation is available online in PDF format.

You can read the ChangeLog for further version details.

Extract TPG-3.2.3.tar.gz and run `python setup.py install`.

The Windows installer is not available anymore because of a virus infection. I will now only distribute source packages.

- TPG: Toy Parser Generator (itself ;-)
- Tentakel: distributed command execution
- Xoot: shorthand to XSLT
- EZgnupy: a front end for gnuplot written in Python
- Ize: a Python module providing a set of function decorators
- Rugg: flexible file system and hard drive crash testing
- Osh: an open-source Python-based object-oriented shell
- pdfminify: re-compresses PDF images

- Python
- Parser-SIG
- See also Simple Parser