Most theoretical studies of inductive inference model a situation involving a machine M learning its environment E along the following lines. M, placed in E, receives data about E and simultaneously conjectures a sequence of hypotheses. M is said to learn E just in case the sequence of hypotheses conjectured by M stabilizes to a final hypothesis that correctly represents E. This model makes the idealized assumption that the data M receives about E comes from a single, accurate source. An argument is made in favor of a more realistic learning model that accounts for data emanating from multiple sources, some or all of which may be inaccurate. Motivated by this argument, the present paper introduces and theoretically analyzes a number of inference criteria in which a machine is fed data from multiple sources, some of which may be infected with inaccuracies. The main parameters of the investigation are the number of data sources, the number of faulty data sources, and the kind of inaccuracies.
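The learning model described above can be illustrated with a small simulation. The sketch below is not from the paper; it is a hypothetical instance in which the environment is the finite language {0, ..., TARGET}, accurate sources enumerate that language, and faulty sources occasionally inject spurious data. The learner accepts a datum only once a strict majority of sources has emitted it, and conjectures the largest accepted datum; since the faulty sources are a minority, its conjecture sequence stabilizes to the correct hypothesis. All names and parameters (`TARGET`, `N_SOURCES`, `N_FAULTY`, the noise model) are assumptions made for illustration.

```python
import random

random.seed(0)

TARGET = 7                      # environment: the language {0, ..., 7}
N_SOURCES, N_FAULTY = 5, 2     # hypothetical parameters: 5 sources, 2 faulty

def source(i):
    """Accurate sources enumerate the language uniformly; faulty sources
    (indices below N_FAULTY) occasionally emit spurious, inaccurate data."""
    while True:
        if i < N_FAULTY and random.random() < 0.3:
            yield random.randint(TARGET + 1, TARGET + 10)  # spurious datum
        else:
            yield random.randint(0, TARGET)                # genuine datum

def learner(streams, steps=500):
    """At each step, read one datum from every source.  Accept a datum only
    once a strict majority of sources has emitted it; conjecture the largest
    accepted datum so far.  Returns the sequence of conjectures."""
    seen = {}                                   # datum -> set of source ids
    conjectures = []
    for _ in range(steps):
        for i, s in enumerate(streams):
            seen.setdefault(next(s), set()).add(i)
        accepted = [d for d, who in seen.items()
                    if len(who) > len(streams) // 2]
        conjectures.append(max(accepted) if accepted else None)
    return conjectures

streams = [source(i) for i in range(N_SOURCES)]
conj = learner(streams)
print(conj[-1])
```

A spurious datum is emitted by at most the two faulty sources, so it never reaches the majority threshold of three; every genuine datum is eventually emitted by all three accurate sources, so the conjectures converge to TARGET and never change thereafter, mirroring stabilization in the limit.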