This week we discuss GPT-2, a new transformer-based language model from OpenAI that has everyone talking. It's capable of generating incredibly realistic text, and the AI community has lots of concerns about potential malicious applications. We help you understand GPT-2, and we discuss ethical concerns, the responsible release of AI research, and resources that we have found useful in learning about language models.

Sponsors:

- Linode – Our cloud server of choice. Deploy a fast, efficient, native SSD cloud server for only $5/month. Get 4 months free using the code changelog2018. Start your server at linode.com/changelog.
- Rollbar – We move fast and fix things because of Rollbar. Resolve errors in minutes. Deploy with confidence. Learn more at rollbar.com/changelog.
- Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.

Featuring:

- Chris Benson – Website, GitHub, LinkedIn, X
- Daniel Whitenack – Website, GitHub, X

Show Notes:

Relevant learning resources:

- Jay Alammar's "Illustrated" blog articles:
  - The Illustrated Transformer
  - The Illustrated BERT, ELMo, and co.
- Machine Learning Explained blog:
  - An In-Depth Tutorial to AllenNLP (From Basics to ELMo and BERT)
  - Paper Dissected: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" Explained

References/notes:

- GPT-2 blog post from OpenAI
- GPT-2 paper
- GPT-2 GitHub repo
- GPT-2 PyTorch implementation
- Episode 22 of Practical AI, about BERT
- OpenAI's GPT-2: the model, the hype, and the controversy (Towards Data Science)
- The AI Text Generator That's Too Dangerous to Make Public (Wired)
- Transformer paper ("Attention Is All You Need")
- Preparing for malicious uses of AI (OpenAI blog)

Upcoming Events:

- Register for upcoming webinars here!
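For listeners who want to experiment with GPT-2 themselves, here is a minimal sketch of sampling text from the small released model. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint; neither the library choice nor the exact parameters come from the episode, so treat this as an illustrative starting point rather than the implementation discussed on the show.

```python
# Illustrative sketch: sample a continuation from the small GPT-2 checkpoint.
# Assumes `pip install transformers torch`; the prompt and sampling settings
# below are arbitrary choices, not values taken from the episode.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt, then sample a continuation with top-k sampling,
# the decoding strategy OpenAI used in its GPT-2 release examples.
input_ids = tokenizer.encode("The AI community is talking about", return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=50,                        # total length, prompt included
    do_sample=True,                       # sample rather than decode greedily
    top_k=40,                             # keep only the 40 most likely tokens
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because decoding is sampled, each run produces a different continuation; setting do_sample=False instead gives deterministic (and usually more repetitive) greedy output.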