USC’s Dr. Burcin Becerik-Gerber wants to make buildings smarter and more efficient, which could mean offices with distinct ‘personalities’ like Alexa or Siri.
Imagine having a place that welcomes you back from the daily grind. Not just with the ideal temperature setting, but a location that’s “alive” and understands the type of music you need to hear or images you want to see, based in part on a readout from your wearable device.
That’s what Dr. Burcin Becerik-Gerber, the Stephen Schrank Early Career Chair in Civil and Environmental Engineering at the University of Southern California, is building at USC’s iLab.
After studying architecture in her native Istanbul and completing an M.S. in Engineering at UC Berkeley and a doctorate in Project Management and Information Systems at Harvard, Dr. Becerik-Gerber joined USC in 2008 to research and create the future of interactive environments.
It’s a field where buildings are sentient: uniquely programmed to suit the needs of their residents while being climate-aware and environmentally sustainable. PCMag spoke with Dr. Becerik-Gerber at her lab to learn more. Here are edited and condensed excerpts from our conversation.
Growing up in Istanbul, famous for both 19th-century Ottoman Empire palaces and modern 53-story skyscrapers to rival Shanghai’s, no wonder you were edified by edifices.
Actually, I became passionate about engineering and construction after the 7.6-magnitude earthquake that struck my city in 1999. I wanted to figure out how to build safer and more efficient buildings, so I applied to come to the US to do my Master’s in the Bay Area, because they had all these new and exciting technological advancements.
You’ve worked in industry and continue to do many real-world, grant-supported industrial collaborations. This isn’t just futuristic pontification about cool tech.
Yes. After my studies at Harvard, I wanted to take a break, for both personal and financial reasons, so I joined an environmental engineering firm to head up their program management software development team. We were building dashboards and solutions on billion-dollar construction projects. I was leading the team and collaborating with colleagues from many disciplines: computer engineers, civil engineers, consultants, and project managers.
What made you go back into academia, which is surely a less lucrative field?
[Laughs] Well, I decided I’m more of a researcher because I wanted to really think about what might be the future of our industry, to bridge all these different disciplines, and bring in social science. I saw, from my experience in industry, that we build places for people, but we don’t take into account how they use them.
That’s when you got interested in human-centric design and operation of buildings?
Yes. How do you understand what people need…or desire? It’s not a static but a dynamic relationship. That’s where the computing part comes into play. In my work today, we bring in environmental and building sensors as well as human sensors, but the data have many dimensions, so we need new tools to fuse, measure, and analyze them.
For example, we can record humidity, carbon dioxide, and people’s heart rates, but how do you correlate this data to thermal comfort? There are seasonal shifts, metabolic changes, and different comfort responses depending on the task you’re engaged in. We take these personal, situational, and temporal differences into account to build adaptive and responsive environments.
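The fusion idea described here can be sketched in a few lines. The following is a minimal illustration, not the iLab’s actual code: the feature names, scaling, and online learning rule are all assumptions. A per-occupant model takes environmental and physiological readings and learns from the occupant’s own comfort feedback over time.

```python
# Minimal sketch of a per-occupant thermal-comfort model that fuses
# environmental (temperature, humidity) and physiological (heart rate)
# readings. Features, scaling, and the update rule are illustrative
# assumptions, not the lab's published method.

def featurize(temp_c, humidity, heart_rate):
    # Scale raw readings into comparable ranges so no one sensor dominates.
    return {"temp": temp_c / 30.0,
            "humidity": humidity,
            "heart_rate": heart_rate / 100.0}

class PersonalComfortModel:
    """Linear model trained online from occupant votes (-1 too cold .. +1 too hot)."""

    def __init__(self, features, lr=0.1):
        self.weights = {f: 0.0 for f in features}
        self.bias = 0.0
        self.lr = lr

    def predict(self, reading):
        return self.bias + sum(w * reading[f] for f, w in self.weights.items())

    def update(self, reading, vote):
        # One stochastic-gradient step toward the occupant's reported vote.
        err = self.predict(reading) - vote
        self.bias -= self.lr * err
        for f in self.weights:
            self.weights[f] -= self.lr * err * reading[f]

model = PersonalComfortModel(["temp", "humidity", "heart_rate"])
# Simulated feedback: this occupant reports feeling hot above roughly 24 C.
for _ in range(1000):
    for temp, vote in [(20, -1), (22, -1), (26, 1), (28, 1)]:
        model.update(featurize(temp, 0.4, 70), vote)
```

After training, the model predicts a higher (warmer) comfort score for this occupant at 28°C than at 20°C; a different occupant’s feedback would yield different weights, which is the personalization the interview describes.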
To further the actual mechanics of how buildings become sentient, you’ve co-authored many papers. Can you talk us through a few?
[One paper] looked at how to combat the fact that more than half of the electricity in residential and commercial buildings is consumed by lighting systems and appliances, which are directly associated with occupant activities. So, by recognizing activities and identifying the associated possible energy savings, more effective strategies can be developed to design better buildings and automation systems. We introduced a framework to detect, in real time, occupant activities, potential wasted energy consumption, and peak-hour usage that could be shifted to non-peak hours, creating three sub-algorithms for action detection, activity recognition, and waste estimation.
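The three-stage pipeline described here can be sketched as follows. This is an illustration only: the thresholds, activity labels, and data shapes are assumptions, not the paper’s actual algorithms.

```python
# Illustrative sketch of the three sub-algorithms described above:
# action detection -> activity recognition -> waste estimation.
# Thresholds, labels, and data shapes are assumptions for illustration.

def detect_actions(power, threshold=50):
    """Flag time steps where appliance power jumps up or down sharply."""
    actions = []
    for t in range(1, len(power)):
        delta = power[t] - power[t - 1]
        if abs(delta) >= threshold:
            actions.append((t, "on" if delta > 0 else "off"))
    return actions

def recognize_activity(power, occupied):
    """Very coarse activity labels from power draw plus occupancy."""
    labels = []
    for p, occ in zip(power, occupied):
        if not occ:
            labels.append("away")
        elif p >= 100:
            labels.append("appliance_use")
        else:
            labels.append("idle")
    return labels

def estimate_waste(power, occupied, interval_h=1.0):
    """Energy (Wh) drawn above standby while the space is unoccupied."""
    standby = min(power)
    return sum((p - standby) * interval_h
               for p, occ in zip(power, occupied) if not occ)

power    = [10, 10, 150, 150, 150, 10]   # watts, hourly readings
occupied = [True, True, True, False, False, True]

actions = detect_actions(power)          # [(2, 'on'), (5, 'off')]
waste = estimate_waste(power, occupied)  # 280.0 Wh left on while away
```

In this toy trace, the appliance is switched on at hour 2 but the occupant leaves at hour 3, so the two unoccupied hours of above-standby draw are flagged as shiftable or avoidable waste.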
Your research identified significant savings via this algorithmic-based platform.
Yes, on average, 35.5 percent of an appliance’s or lighting system’s consumption was identified as potential savings.
But getting people to change behavior, in order to take advantage of these findings, required more work?
Right. We needed to build a more human-centric approach. In [another] paper, for which I was granted an NSF award, we considered ways to enhance the interaction between buildings and occupants, establishing trust with building automation. The work drew on theories from the behavioral sciences to mathematically model when and how a building should interact with a user and how these interactions should be framed.
How do you do that?
As an example, we conducted one project where we used VR to [see] whether people would behave more responsibly within a building branded as “green.” We built a simulation and observed the behavior of two groups. One group was told they were in a LEED-certified building, the other group was not. We found that people behaved so much better, in terms of energy usage, recycling and so on, when they were told the building was already “green.”
That was inside VR. Can you talk about the specific technologies you use to create your physical platforms?
Here at the lab, and in other testbed buildings we retrofit, we employ sensors from National Instruments, machine learning, LabVIEW, Data Loggers, and signal-processing tools. All the algorithms are created by my PhD students using C++, Python, and other software packages. We take data outputs from mobile phones, build models in Unity, use Oculus for VR, HoloLens for AR, and develop our own wearables.
You build your own wearables?
We’ve built lots of them. As an example, on one project, we 3D-printed helmets for construction workers, embedded with sensors, to examine fatigue levels, specifically for those working in hot climates. We’ve also 3D-printed infrared-fitted eyeglasses here in the lab, for sensing and monitoring thermal comfort.
Can you share a couple of your industry collaboration case studies with us, too?
The IoT company Lyngsoe Systems asked us to create an RFID-based indoor location-sensing platform, so they could track the movement of equipment and materials as the company moves toward lights-out, automated 24/7 manufacturing. We also equipped a Lidar lab for the construction company Kiewit to track displacements in high retaining walls using Lidar scanning. We are currently collaborating with a global engineering firm, Arup, to investigate the use of machine learning in improving office workers’ comfort, productivity, and health.
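One common way such an indoor location-sensing platform works (an assumed, generic approach for illustration, not necessarily the one built for Lyngsoe Systems) is to convert each fixed reader’s received signal strength into a distance estimate, then trilaterate the tag’s position:

```python
# Generic sketch of RFID/RSSI indoor positioning (an assumption for
# illustration, not the actual Lyngsoe platform): convert received
# signal strength to distance, then trilaterate from three readers.

import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated distance in meters."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(readers, distances):
    """Solve for the tag's (x, y) given three readers at known positions.

    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = readers
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

readers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# A tag at (3, 4) sits at these distances from the three readers:
dists = [math.dist((3.0, 4.0), r) for r in readers]
x, y = trilaterate(readers, dists)       # recovers (3.0, 4.0)
```

In practice RSSI is noisy indoors, so real platforms filter readings over time and use more than three readers; this sketch shows only the geometric core.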
If those examples are about buildings being able to communicate with people, can you go further and illustrate what it’s like giving them a personality?
The engineering school here at USC made a podcast, which imagined a future building with its own personality called “Kate.” This illustrated my ultimate goal of enabling cyber-physical systems to interact and collaborate with humans.
Perhaps not everyone wants a “Kate” voice, though.
Agreed. I really believe in complete personalization, as cultural differences come into play in what kind of personality you’d most appreciate. We’ve done experiments where we’ve used virtual avatars rather than voices, such as a building manager walking toward you.
Ultimately, I want the setup process to be as unintrusive as possible. The devices have to be more intelligent; it shouldn’t be a chore setting up these relationships, it should be frictionless. I see buildings as cognitive entities, similar to autonomous cars in a way. We spend 90 percent of our time indoors. Why can’t the building be your friend if you’re spending so much time in it? Essentially you’re inside the machine and everything becomes an interface. There are very beautiful buildings architecturally, but I want to make all buildings smarter, more sustainable, efficient, and resilient.
How far away is this scenario?
It’s happening now. It’s the right time for this field to flourish: we have the data and the machine-learning algorithms, and society is now primed for personalization through our interactions with computing devices. Wearables are becoming ubiquitous; we can track everything. Plus, the new generation expects everything to be personalized, so why not buildings?
As a final question, what’s next for you?
I’m very excited to be spending the summer at the Alan Turing Institute, as a Rutherford Fellow, furthering my research on “disaster-prepared buildings.” I’ll be working with several scientists on data-driven engineering design under uncertainty to find better solutions for the future of buildings and the humans who live and work in them.
By: S.C. Stuart (PC)