Key Takeaways from Big Data Analytics Workshop
  • Brandon Ashby


I attended UNF’s International School of Business’s workshop on The Impact of Big Data Analytics in Corporations. The workshop is one of four meetings discussing the opportunities and challenges businesses face in today’s market. The analytics panel included experts from academia, global supply chain, and healthcare. All the speakers shared their insights and practical uses of big data while fielding questions from the audience.


What is big data, anyway?


The five V’s are a popular shorthand for big data: Volume, Variety, Veracity, Velocity, and Value.

[Image: The Five V’s of Big Data]


Big data starts with a variety of sources like e-mail, data logs, or GPS location. Did you know there are over 50 data sources, from ERP transactional data to GPS telematics? The sheer volume makes it intimidating for organizations to turn that data into useful information. Velocity and veracity (quality) are the next contributing factors. With technology advancing rapidly, users and systems now receive information in real time. Data veracity is critical for any organization or project to successfully turn data into valuable information. As an analyst, I remember cleansing data was always the hardest step. Are you leveraging big data in your workplace? How many sources do you pull data from?
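To show what that cleansing step looks like in practice, here is a minimal pandas sketch. The column names, values, and cleaning rules are hypothetical illustrations, not data from the workshop:

```python
import pandas as pd

# Hypothetical raw export: a duplicated row, inconsistent casing/whitespace,
# a missing key field, and numbers stored as strings.
raw = pd.DataFrame({
    "order_id": [1001, 1001, 1002, 1003],
    "region":   ["East", "East", " west ", None],
    "amount":   ["250.00", "250.00", "99.5", "120"],
})

clean = (
    raw.drop_duplicates()              # remove repeated rows
       .dropna(subset=["region"])      # drop rows missing a key field
       .assign(
           # normalize text: strip whitespace, standardize casing
           region=lambda df: df["region"].str.strip().str.title(),
           # convert string amounts to numeric for analysis
           amount=lambda df: df["amount"].astype(float),
       )
)
print(clean)
```

Even this toy example needs four distinct fixes before the data is usable, which is why cleansing so often dominates an analyst's time.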


What do we do with Big Data?


The initial response is to make data-driven decisions with the information, but that’s always easier said than done. The attendees had a lively discussion about data vs. experience. Business leaders tend to rely on their experience, while analysts prove their points with statistics. Statistical tools like linear regression or analysis of variance tend to create more confusion than clarity. The panel recommended crafting stories around the data or connecting it to business metrics. In my experience as a Lean Six Sigma Black Belt, every project included a storyboard that started with a financial metric. Many metrics exist, but the recommendation was to tie results to revenue or cost. How does your team share data and information?


How do we leverage big data?


One panelist shared the standard process for data analysts, CRISP-DM (Cross-Industry Standard Process for Data Mining). Have you noticed there’s an acronym for everything? Acronyms vary between departments and companies, but learning the underlying concepts is what matters. Like other problem-solving methods, CRISP-DM starts with understanding the business case and the available data. To really solve a problem with data, the team must clearly define the problem first. In Six Sigma projects, documenting the business case and data is referred to as the voice of the customer and the voice of the business. After comprehending the business case and data, data preparation is the next step. The modeling phase consists of selecting a tool and building a model. After the model is built, it is evaluated against real data. The final phase is deployment to the business.

Several audience members were interested in improving their big data skills. The panelists gave several tips and resources to brush up on data modeling. MIT OpenCourseWare offers an Introduction to Computational Thinking and Data Science course, and there are over 6,000 data science Meetup groups for collaborating with others. The last discussion point was which software analysts should use: Python, R, or SAS. The consensus was Python, due to its ease of use and the availability of resources. What software are you currently using?


If you need assistance or would like to discuss process improvement for your organization, please e-mail me to start a conversation about opportunities and challenges.

Contact:

  • LinkedIn
  • Facebook
  • Twitter
  • YouTube