- IPRO credit/I-Course status pending approval as of April 10
- WORK in a team — diverse skills and majors build the whole
- LEARN from and MENTOR your peers
- APPLY data science, mathematics, statistics to real-world problems
- CREATE and PROMOTE a paper, poster, talk, GitHub repo, app, dashboard
- BUILD marketable skills for employment or graduate school
- Robert Ellis, Assoc. Prof. of Applied Math | ellisr@iit.edu | math.iit.edu/~rellis
- David Eads, Senior Editor of Design and Delivery, The Chicago Reporter
Topic 1: Demographic distribution and effects of the COVID-19 pandemic in Chicago
Topic 2: Polling location placement. Determine whether polling places in a state's counties/precincts are fairly located. Red/Blue team format: the Blue team optimizes placement; the Red team analyzes it for weaknesses.
Topic 3: Facebook political advertising data sets. Explore patterns and targeting in Facebook political ads since May 2018. Both a Facebook and a ProPublica data set are available.
Format: Directed team research and problem-solving with a faculty adviser and subject matter expert; twice-weekly class meetings; plus individual and team work each week. Periodic deliverables will lead to a final paper/poster/talk, and IPRO Day participation.
Course description: Deep Neural Networks for Science and Engineering is a research course on designing and implementing solutions to challenging mathematical and statistical problems arising in modeling and prediction. Students will form groups and solve a specific scientific or engineering problem using deep neural networks.
Course Objectives: On completion of the course, students should be able to demonstrate expertise in the following topics:
- Review of statistical learning theory
- Feedforward architectures
- Training and back-propagation
- Error analysis of feedforward architectures
- Surrogate modeling for PDEs, including applications in calibration and enforcing conservation laws
- Recurrent neural networks for modeling dynamical systems
- Convolutional neural networks for spatio-temporal modeling
- Autoencoders for dimensionality reduction
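As a flavor of the first few topics above (feedforward architectures, training, and back-propagation), here is a minimal sketch of a one-hidden-layer network trained by hand-coded back-propagation in NumPy. The toy task (regressing y = x² on [-1, 1]), network size, and learning rate are illustrative choices, not part of the course materials.

```python
import numpy as np

# Toy data: regress y = x^2 on [-1, 1] (illustrative task, not from the course).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))   # inputs
Y = X ** 2                              # targets

# One hidden layer of 16 tanh units (sizes are arbitrary illustrative choices).
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = H @ W2 + b2
    loss = np.mean((pred - Y) ** 2)     # mean squared error

    # Backward pass: chain rule through both layers
    d_pred = 2 * (pred - Y) / len(X)    # dLoss/dpred
    dW2 = H.T @ d_pred; db2 = d_pred.sum(axis=0)
    dH = d_pred @ W2.T
    dZ = dH * (1 - H ** 2)              # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dZ; db1 = dZ.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

Students in the course would typically use a framework (e.g., TensorFlow or PyTorch) rather than hand-coding gradients, but writing back-propagation once by hand makes the chain-rule structure of training explicit.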