We outline an incremental learning algorithm designed for nonstationary environments where the underlying data distribution changes over time. With each dataset drawn from a new environment, we generate a new classifier. Classifiers are combined through dynamically weighted majority voting, where each voting weight is based on the classifier's age and its accuracy on current and past environments. The most recent and relevant classifiers are weighted higher, allowing the algorithm to adapt appropriately to drifting concepts. The algorithm does not discard prior classifiers, allowing efficient learning of potentially cyclical environments, and it learns incrementally, i.e., without access to previous data. Finally, the algorithm can use any supervised classifier as its base model, including those not normally capable of incremental learning. We present the algorithm and evaluate its performance with several base learners across environments exhibiting varying types of drift.
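The ensemble scheme described above can be illustrated with a minimal sketch. This is not the authors' exact method: the age-decay factor, the accuracy floor, and the simple nearest-centroid base learner are all illustrative assumptions. It shows the core ideas only: one classifier trained per incoming batch (no access to previous data), all members retained, and predictions combined by majority vote with weights that favor recent members that are accurate on the current environment.

```python
import numpy as np

class Centroid:
    """Toy base learner (nearest class centroid); any supervised
    classifier could be substituted, per the abstract."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(-1)
        return self.classes[d.argmin(axis=1)]

class DriftEnsemble:
    """Illustrative sketch of a dynamically weighted majority-vote
    ensemble for concept drift (weighting scheme is assumed, not
    taken from the paper)."""
    def __init__(self, base_learner_factory, decay=0.5):
        self.factory = base_learner_factory
        self.decay = decay        # assumed exponential age-decay rate
        self.members = []         # list of (classifier, age) pairs; never discarded
        self.weights = []

    def partial_fit(self, X, y):
        # Reweight existing members by accuracy on the CURRENT batch,
        # discounted by age, so recent and relevant members dominate.
        for i, (clf, age) in enumerate(self.members):
            acc = np.mean(clf.predict(X) == y)
            self.weights[i] = max(acc, 1e-6) * np.exp(-self.decay * age)
            self.members[i] = (clf, age + 1)
        # Train a new classifier on the current batch only:
        # incremental learning without access to previous data.
        clf = self.factory()
        clf.fit(X, y)
        self.members.append((clf, 0))
        self.weights.append(1.0)  # assumed initial weight for the newest member

    def predict(self, X):
        # Weighted majority vote over all retained classifiers.
        votes = [{} for _ in range(len(X))]
        for (clf, _), w in zip(self.members, self.weights):
            for i, p in enumerate(clf.predict(X)):
                votes[i][p] = votes[i].get(p, 0.0) + w
        return np.array([max(v, key=v.get) for v in votes])
```

For example, after a second batch arrives with shifted class centers, the newly trained member (high weight) outvotes the stale one (whose accuracy on the current batch has dropped), so the ensemble tracks the drifted concept while still retaining the old classifier in case the earlier environment recurs.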