About
Responsibilities
Directing large-volume data gathering, data mining, and data processing; creating appropriate data models.
Exploring, promoting, and implementing semantic data capabilities through Natural Language Processing, text analysis, and machine learning techniques.
Leading the definition of requirements and scope for data analyses; presenting and reporting potential business insights to management using data visualization technologies.
Evaluating and researching data model optimizations and algorithms to improve the effectiveness and accuracy of data analyses.
Skills Required
Solid knowledge of applying Python (NumPy, SciPy, Pandas, etc.) programming to solve business challenges.
Working knowledge of and experience with practical applications of machine learning techniques such as clustering, logistic regression, random forests, SVMs, or neural networks.
Experience with AWS services, Cloud ELT/ETL, Snowflake, SQL.
Strong knowledge of the end-to-end data lifecycle across traditional data warehouses, relational databases, operational data stores, and business intelligence reporting, as well as newer concepts such as data fabrics, data mesh, and the data lakehouse.
Strong understanding of data governance, metadata management, data quality, data modeling, and data architecture concepts.
Experience utilizing quantitative analysis and data management (data design, data quality, metadata, governance, etc.).
Knowledge of data technology products and components for Big Data and Cloud (AWS, Data Lakes, and similar).
Ability to clearly communicate complex technical ideas, regardless of the technical capacity of the audience.
Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve these problems.
Query and Database Access Tools: Knowledge of data management systems; ability to use, support and access facilities for searching, extracting and formatting data for further use.
Analytics Programming Languages: Python, R, SQL, and SAS
Data Engineering and Warehousing: Snowflake
ML Platform: PyTorch or TensorFlow
Insights Delivery: Power BI; R Shiny / Dash / Streamlit
Cloud Platform: AWS (S3, EC2, Lambda, Glue, SageMaker) or its Azure equivalents
AI and Gen AI Orchestration: Ollama, Amazon Bedrock / SageMaker, or Azure ML / Azure OpenAI
Education & Work Experience
Master's degree with 5 years of experience or a Bachelor's degree with 10 years of experience in Computer Science, Mathematics, Engineering, Accounting, Statistics, Data Science, Business Analytics, or a closely related field with extensive coursework in mathematical and statistical modeling.
Title
Data Scientist
Location
Chicago, IL
Client Industry
Industrial Digital
Languages
- English
Notice for Users
This job comes from a TieTalent partner platform. Click "Apply Now" to submit your application directly on their site.