Markdown Files

Whether you write your book’s content in Jupyter Notebooks (.ipynb) or in regular Markdown files (.md), you’ll write in the same flavor of Markdown, called MyST Markdown. This is a simple page to help you get started and to show off some of that syntax.

What is MyST?

MyST stands for “Markedly Structured Text”. It is a slight variation on a flavor of Markdown called CommonMark, with small syntax extensions that let you write Sphinx roles and directives.

For more about MyST, see the MyST Markdown Overview.

Sample Roles and Directives

Roles and directives are two of the most powerful tools in Jupyter Book. They are kind of like functions, but written in a markup language. They both serve a similar purpose, but roles are written in one line, whereas directives span many lines. They both accept different kinds of inputs, and what they do with those inputs depends on the specific role or directive that is being called.
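As a rough sketch of the two shapes (the names `somerole` and `somedirective` below are generic placeholders, not real roles or directives), MyST source looks like this:

````markdown
A role sits inline in a sentence: {somerole}`content for the role`.

A directive spans a fenced block, optionally with `:key: value` options at the top:

```{somedirective}
:someoption: some value

Content for the directive goes here.
```
````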

Here is a “note” directive:
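In MyST source, this is written as a fenced directive block. A minimal sketch (MyST also accepts ::: colon fences, so the exact fence characters in this book’s source are an assumption):

````markdown
```{note}
Here is a note
```
````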

When you build your book, it will be rendered as a highlighted note box containing the text “Here is a note”.

Here is an inline role that refers to another document in this book: Notebooks with MyST Markdown.
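In MyST source, that kind of cross-reference is typically written with the {doc} role; the target below is a placeholder for the other page’s actual source filename:

```markdown
<!-- "markdown-notebooks" is a placeholder target filename -->
Here is an inline role to refer to a document: {doc}`markdown-notebooks`
```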

Citations

You can also cite references that are stored in a bibtex file. For example, the following syntax: {cite}`holdgraf_evidence_2014` will render like this: [Holdgraf et al., 2014].

Moreover, you can insert a bibliography into your page with the {bibliography} directive; it must appear somewhere in your book for the {cite} roles to render properly. For example, if the references for your book are stored in references.bib, the directive produces the bibliography shown below.
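A minimal sketch of the MyST source for both pieces (the fence style is an assumption, and which .bib files are used is set in the book’s configuration rather than in the directive itself):

````markdown
You can cite a reference inline with a role: {cite}`holdgraf_evidence_2014`.

The full bibliography is then inserted with a directive:

```{bibliography}
```
````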

Ack20

L.M. Ackermann. Processing Shakespeare. Blog post, 2020. URL: https://lmackerman.com/AdventuresInR/docs/shakespeare.nb.html#n-grams.

Ala18

Jay Alammar. The illustrated transformer. Blog post, 2018. URL: https://jalammar.github.io/illustrated-transformer/.

Ala20

Jay Alammar. How GPT3 works - visualizations and animations. Blog post, 2020. URL: https://jalammar.github.io/how-gpt3-works-visualizations-animations/.

Dea09

John Deacon. Model-view-controller (MVC) architecture. 2009. URL: http://www.jdl.co.uk/briefings/MVC.pdf (cited 10 March 2006).

Dev16

Utopia Developers. Writing unit tests. Documentation for Utopia, 2016. URL: https://docs.utopia-project.org/html/usage/implement/unit-tests.html#.

GHJV94

Erich Gamma, Richard Helm, Ralph Johnson, and John M. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley Professional, 1st edition, 1994. ISBN 0201633612. URL: http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612/ref=ntt_at_ep_dpi_1.

Gra17

H. Graca. MVC and its alternatives. Blog post, 2017. URL: https://herbertograca.com/2017/08/17/mvc-and-its-variants/ (visited 2023).

HdHPK14

Christopher Ramsay Holdgraf, Wendy de Heer, Brian N. Pasley, and Robert T. Knight. Evidence for Predictive Coding in Human Auditory Cortex. In International Conference on Cognitive Neuroscience. Brisbane, Australia, 2014. Frontiers in Neuroscience.

JM09

Daniel Jurafsky and James H. Martin. N-grams. In Speech and Language Processing, chapter 4. Prentice-Hall, Englewood Cliffs, NJ, 2009.

LSD12

Sheydi Anel Zamudio Lopez, Rene Santaolaya Salgado, and Olivia Graciela Fragoso Diaz. Restructuring object-oriented frameworks to model-view-adapter architecture. IEEE Latin America Transactions, 10(4):2010–2016, 2012.

Meh20

Arshad Mehmood. Generate unigrams-bigrams-trigrams-ngrams etc. in Python. Blog post, 2020. URL: https://arshadmehmood.com/development/generate-unigrams-bigrams-trigrams-ngrams-etc-in-python/.

Ols15

Tobias Olsson. Evolution and evaluation of the model-view-controller architecture in games. In Proceedings of the ICSE 4th International Workshop on Games and Software Engineering, Florence, Italy, May 2015. doi:10.1109/GAS.2015.10.

Ope22

OpenAI. Introducing ChatGPT. OpenAI blog post, 2022. URL: https://openai.com/blog/chatgpt.

OWJ+22

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, and others. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.

PV19

Giulio Ermanno Pibiri and Rossano Venturini. Handling massive n-gram datasets efficiently. ACM Transactions on Information Systems, February 2019. doi:10.1145/3302913.

Str19

Michael Struwig. What is a skipgram? Blog post, 2019. URL: https://www.notsobigdatablog.com/2019/01/02/what-is-a-skipgram/.

TGZ+23

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: an instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.

TLI+23

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: open and efficient foundation language models. 2023. arXiv:2302.13971.

VSP+17

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017. URL: http://arxiv.org/abs/1706.03762.

Wei66

Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966. doi:10.1145/365153.365168.

Learn more

This is just a simple starting point. You can learn a lot more at jupyterbook.org.