Text Simplification consists of rewriting sentences to make them easier to read and understand, while preserving as much of their original meaning as possible. Simplified texts can benefit non-native speakers, people with low literacy, and people with aphasia, dyslexia, or autism. Human editors simplify by performing several text transformations, such as replacing complex terms with simpler synonyms, reordering words or phrases, removing non-essential information, and splitting long sentences. However, current end-to-end models for the task are trained on datasets that do not necessarily contain instances exhibiting this variety of operations, so it is unclear whether such models implicitly learn to produce outputs with these multi-operation characteristics. In this talk, I will present recent work that tackles this problem on two fronts: (1) ASSET, a new dataset for tuning and testing models against multi-operation reference simplifications, and (2) a model that combines linguistically motivated rules with neural sequence-to-sequence models to produce varied rewriting styles when simplifying.
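
To make the operations mentioned above concrete, the following is a minimal, purely illustrative Python sketch of two of them, lexical substitution and sentence splitting. The synonym lexicon and the split heuristic are hypothetical toy examples and are not the rules or the model discussed in the talk.

    # Toy illustration of two simplification operations:
    # lexical substitution and sentence splitting.
    # The lexicon and the split heuristic are hypothetical examples.
    import re

    SIMPLER_SYNONYMS = {  # complex term -> simpler synonym (hypothetical)
        "commence": "start",
        "approximately": "about",
    }

    def substitute_terms(sentence: str) -> str:
        """Replace complex terms with simpler synonyms."""
        for complex_term, simple_term in SIMPLER_SYNONYMS.items():
            sentence = re.sub(rf"\b{complex_term}\b", simple_term, sentence)
        return sentence

    def split_on_connective(sentence: str, connective: str = ", which ") -> str:
        """Split a long sentence at a relative clause (very naive heuristic)."""
        if connective in sentence:
            main, rest = sentence.split(connective, 1)
            return f"{main}. It {rest}"
        return sentence

    original = ("The committee will commence the review, which will take "
                "approximately two weeks.")
    print(split_on_connective(substitute_terms(original)))
    # The committee will start the review. It will take about two weeks.

Real simplification systems, of course, must decide when each operation applies and preserve meaning, which is precisely what the datasets and models presented in the talk are designed to evaluate and learn.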