A project under development with a semiconductor manufacturer monitors the
detection and classification of failures in real time.
With stream computing, defects in chips being manufactured are detected in
minutes rather than hours or even weeks. Defective wafers can be reprocessed
and, more importantly, adjustments can be made to the manufacturing processes
themselves, while still in production.
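As a rough illustration of the idea (not the actual project, whose details have not been published), the sketch below shows a minimal stream-processing loop in Python: hypothetical wafer-sensor readings are checked against a sliding-window baseline as they arrive, so an out-of-spec wafer is flagged moments after the measurement instead of after a batch report. All names and thresholds here are illustrative assumptions.

```python
from collections import deque
import random

# Hypothetical sketch: a sliding-window check over a stream of wafer
# measurements. wafer_id, thickness_nm and the limits are illustrative only.
WINDOW = 50          # number of recent readings used as the baseline
TOLERANCE = 0.05     # flag readings more than 5% away from the window mean

def sensor_stream(n=500):
    """Simulate measurements arriving from the fab, one wafer at a time."""
    for wafer_id in range(n):
        thickness_nm = random.gauss(120.0, 1.0)
        if wafer_id % 97 == 0:            # inject an occasional defect
            thickness_nm *= 1.10
        yield wafer_id, thickness_nm

def detect_failures(stream):
    window = deque(maxlen=WINDOW)
    for wafer_id, thickness_nm in stream:
        if len(window) == WINDOW:
            baseline = sum(window) / WINDOW
            if abs(thickness_nm - baseline) / baseline > TOLERANCE:
                # In a real deployment this would trigger reprocessing or a
                # process adjustment; here we just report the anomaly.
                print(f"wafer {wafer_id}: {thickness_nm:.1f} nm out of spec")
        window.append(thickness_nm)

if __name__ == "__main__":
    detect_failures(sensor_stream())
```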
Cloud computing is also a booster for Big Data, because public
clouds can handle huge volumes of data. The cloud's elasticity lets us spin up
virtual servers on demand, only for the time it takes to process that data.
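A minimal sketch of that elasticity idea, assuming a hypothetical batch of jobs and placeholder `provision_server`/`release_server` calls standing in for whatever API a real cloud provider exposes: servers exist only while there is data to process, so we pay only for the processing time itself.

```python
import math

# Hypothetical sketch of elastic, pay-per-use processing. provision_server()
# and release_server() are placeholders for a real cloud provider's API.
JOBS_PER_SERVER = 100

def provision_server(i):
    print(f"starting virtual server {i}")
    return i

def release_server(server):
    print(f"shutting down virtual server {server}")

def process_batch(pending_jobs):
    # Scale out only when there is data to process...
    needed = math.ceil(len(pending_jobs) / JOBS_PER_SERVER)
    servers = [provision_server(i) for i in range(needed)]
    try:
        for job in pending_jobs:
            pass  # dispatch each job to one of the servers
    finally:
        # ...and scale back to zero as soon as the batch is done.
        for server in servers:
            release_server(server)

process_batch(list(range(250)))   # 250 jobs -> 3 short-lived servers
```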
In any case, Big Data is knocking on our door. Its potential is
not yet fully recognized, but we already see clear signs of its importance in
solving diverse problems, from socioeconomic issues to epidemic
prevention.
For companies, Big Data opens up new and still
unexplored territory. We lack knowledge, experience and even professional
expertise. People are beginning to talk about new roles such as data scientist,
and it is inevitable that CIOs will have to put Big Data on their radar screens.
The opportunities that the five V's bring cannot and should not be wasted.
The other day I wrote a post with a simple formula to
conceptualize Big Data: it equals volume + variety + velocity. But I
usually add two more V's to the equation: veracity and value. Let's look at
these topics in a bit more detail.
Volume is clear: we generate petabytes of data every day, and this
volume is estimated to double every 18 months. Variety as well, since
the data comes from structured sources (now a minority) and unstructured ones (the
vast majority), generated by emails, social media (Facebook, Twitter, YouTube
and others), electronic documents, PowerPoint presentations, instant messages,
sensors, RFID tags, video cameras, and so on.