# Basic arithmetic in LaTeX

More and more, I use LaTeX as scratch paper, sometimes to develop numerical (counter-)examples. I often find it much cleaner and better organized than relying on actual pen and paper.

Numerical examples are usually a “back and forth” kind of business, with many moving parts, as you try to find the numbers that “make it work here” without “changing things there”.

So finding the right combination of numbers often requires a lot of simple (but error-prone) calculations. Here too, LaTeX can help save time and prevent mistakes.

With pen and paper, you would usually (1) write down your arithmetic (e.g., (1/3)*57 + (1/4)*23 + (5/12)*20), (2)  use a calculator or spreadsheet to perform the calculation, and (3) write down the result on your sheet of paper.

There are many ways to make mistakes in that process.

LaTeX lets you reduce the chances of mistakes and saves you some time going back and forth between pen, paper, and calculator. Have a look at https://tex.stackexchange.com/questions/30081/how-can-i-sum-two-values-and-store-the-result-in-other-variable. Very much like you would in Markdown, LaTeX lets you (with the help of a dedicated package) (1) write down your arithmetic and (2) simply ask LaTeX to compute the result.

The code looks something like this:

```latex
\documentclass{article}
\usepackage[nomessages]{fp}% http://ctan.org/pkg/fp
\begin{document}

\FPeval{\result}{clip(
(1/3)*57 + (1/4)*23 + (5/12)*20
)}

$(1/3)*57 + (1/4)*23 + (5/12)*20 = \result$

\end{document}
```

As illustrated in https://tex.stackexchange.com/questions/30081/how-can-i-sum-two-values-and-store-the-result-in-other-variable, you can easily round decimals to your liking:

```latex
\documentclass{article}
\usepackage[nomessages]{fp}% http://ctan.org/pkg/fp
\begin{document}

% fp's round takes a colon-separated number of decimals:
% round(expression:digits)
\FPeval{\result}{round(100/3:1)}

$100/3 = \result$

\end{document}
```
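The Stack Exchange thread above is really about storing results in variables, which fp also handles: a computed value saved in a macro can be reused inside a later `\FPeval` expression. Here is a minimal sketch of that (the macro names `\partA`, `\partB`, and `\total` are my own):

```latex
\documentclass{article}
\usepackage[nomessages]{fp}% http://ctan.org/pkg/fp
\begin{document}

% Store each intermediate result in its own macro...
\FPeval{\partA}{clip((1/3)*57)}
\FPeval{\partB}{clip((1/4)*23)}
% ...then reuse those macros in a later computation
\FPeval{\total}{clip(\partA + \partB)}

$\partA + \partB = \total$

\end{document}
```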

# p-hacking (and other data manipulations) made mainstream?

I am not a big fan of the last part of the talk, which is too optimistic about randomized control trials and may make them look like some sort of panacea. RCTs have important limitations, especially when it comes to external validity. If you run enough of them, you are bound to find the kind of patterns Laura Arnold seems to criticize with the 15-ingredients study (7:20 in the video). What works here now might not work there tomorrow. Eventually, you need structural models and sound theory to understand when a treatment works and when it does not.

That’s the curse of the TED format. Arnold is somewhat critical of the format, but she has to fit in it. I am sure she and the organizations she advocates for know of the limits of RCTs. But there’s only so much you can say in 18 minutes if you want to conclude on an uplifting note.

Still, the overall message is very much worth spreading. It’s important to popularize notions like p-hacking and the file-drawer effect, which rarely even make it into introductory statistics classes. And it’s nice to see TED being a little critical of itself (or rather, TEDx being critical of TED).

# “Betting and belief: prediction markets and attribution of climate change” on Andrew Gelman’s blog

One slightly unpleasant feature of LanguageTool with TeXstudio is that new words are a little harder to add to the dictionary than with TeXstudio’s native spell check.

The good part of having to add words through LanguageTool is that the words you add are, well, actually added to the dictionary, whereas adding words to TeXstudio’s native dictionary is, in my experience, unstable (I’ve had to re-add the same words on many occasions, in particular after updates).

For an explanation of how to add words to LanguageTool’s spell check, once again see the very good documentation on LanguageTool’s website at http://wiki.languagetool.org/hunspell-support#toc0.

I will try to keep an updated list of the words I add here. The list might be of some use in particular to those working in a field related to microeconomic theory.

# Grammar nightmares: the road to salvation with LanguageTool

I am terrible with grammar, as you will likely observe somewhere in this post or elsewhere on my website. This is often very embarrassing. These days, however, I should be able to spare myself the embarrassment given the plethora of language-checking software.

There are two main reasons these tools do not do the job for me:

1. I do most of my writing in LaTeX with TeXstudio, and TeXstudio only comes with a rudimentary spell checker with little grammar-checking ability (and it’s a pain to copy-paste into Word, mostly because Word’s checker gets caught up in LaTeX syntax).
2. I am so bad that even state-of-the-art language checkers do not catch most of my mistakes. For instance, I am very bad with homophones. I often get words like “to” and “too” mixed up when I write, which even Word’s checker misses most of the time.

Regarding 2., what I really need is a language checker in which I can set up my own rules. When I realize I’ve made a mistake, I know I am likely to make that mistake again. Thus it is just a matter of making the effort to write down a rule that will catch that mistake for me in the future.

I’ve wanted to do just that for a while, but never found the right tool. My salvation might come from LanguageTool.

1. LanguageTool works with TeXstudio (see https://www.youtube.com/watch?v=VYIY7bbSv4Q for a simple installation tutorial) and improves upon TeXstudio’s default language checker out of the box.
2. LanguageTool gives you the ability to add your personal rules using a relatively straightforward syntax (there is a learning curve, but it’s not too bad).

For instance, I can easily tell LanguageTool to look for instances of “It is not to bad” (which Word’s checker does not flag) and suggest replacing it with “It is not too bad”.

Rules in LanguageTool are quite versatile and allow for regular expressions.

LanguageTool’s tutorial at http://wiki.languagetool.org/development-overview#toc4 explains very clearly how to create and add rules. Because that link has broken in the past, here is a direct quote describing the basics:

“Most rules are contained in rules/xx/grammar.xml, whereas xx is a language code like en or de. In the source code, this folder will be found under languagetool-language-modules/xx/src/main/resources/org/languagetool/; the standalone GUI version contains them under org/languagetool/.

A rule is basically a pattern which shows an error message to the user if the pattern matches. A pattern can address words or part-of-speech tags. Here are some examples of patterns that can be used in that file:

• `<token>think</token>` matches the word think
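To make the quoted pattern syntax concrete, here is a sketch of what a personal rule for my “to/too” mistake could look like in grammar.xml. The rule `id`, `name`, and message wording are my own invention; the element names (`pattern`, `token`, `message`, `suggestion`, `example`, `marker`) follow LanguageTool’s documented rule syntax:

```xml
<!-- Flags "not to bad" and suggests "not too bad" -->
<rule id="NOT_TO_BAD" name="to/too confusion: not to bad">
  <pattern>
    <token>not</token>
    <token>to</token>
    <token>bad</token>
  </pattern>
  <message>Did you mean <suggestion>not too bad</suggestion>?</message>
  <example correction="not too bad">It is <marker>not to bad</marker>.</example>
</rule>
```

The `example` element doubles as a built-in test case: LanguageTool checks that the rule actually fires on the marked text and produces the stated correction, which is handy when you accumulate many personal rules.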