ChatGPT was born in San Francisco. Why isn’t the city fully embracing AI yet?

Chase DiFeliciantonio

July 14, 2023 | Updated: July 14, 2023, 4:57 p.m. (SFChronicle.com)

A bicyclist rides along Market Street past the building where the SFMTA headquarters is located on Van Ness Avenue in San Francisco. The transit agency says it is looking into using artificial intelligence in some capacities. (Benjamin Fanjoy/Special to The Chronicle)

Earlier this year, New York City Public Schools blocked access to ChatGPT, the chatbot built by OpenAI and launched this past November that ignited the current wave of innovation in artificial intelligence tools built on machine learning. Then, in May, the department changed course, embracing the chatbot and even using it to create lesson plans and to grade papers.

It’s an example of a city-level agency adjusting policies in real time when it encounters a new, transformative technology. But, in San Francisco, where ChatGPT was born, many city officials across separate departments say there are no policies in place about how employees and contractors should use machine-learning technology, or other kinds of artificial intelligence, when providing city services.

Neither the city’s Department of Technology nor its Committee on Information Technology, whose job is to make “decisions regarding the future of San Francisco’s technology,” responded to emails.

When The Chronicle asked how various city departments might use artificial intelligence, many of those queries were redirected to the city’s technology department, the mayor’s office or the office of the city administrator. Follow-up queries on individual department policies made clear that the mayor’s office and the city administrator’s office are working on AI policies, but nothing is yet in place.

“The Mayor has asked the City Administrator’s Office to undertake the lead in developing guidelines for AI, particularly generative AI, so that we can best incorporate this new technology in how we serve the public,” Jeff Cretan, spokesperson for Mayor London Breed, said in an email.

Even the San Francisco Unified School District, which has lessons that focus on digital ethics and agency, doesn’t have a policy on how AI should or should not be used and hasn’t weighed in on the use of tools like ChatGPT, spokeswoman Laura Dudnick wrote in an email.

“The district is aware of this technology and will continue to monitor the impact of AI, as we do with any changes that may affect the education space,” Dudnick said. “We are currently not blocking this technology use nor have we created or changed policy as a result.”


In a recent article in Wired, Beth Noveck, a professor at Northeastern University’s Institute for Experiential AI, pointed out that Boston has actually encouraged its employees to use generative AI like ChatGPT to potentially improve their work output — including using it to draft emails and memos more quickly or to rapidly translate complex government language into more digestible text or other languages.

“At the very least it’s like a really good word processor,” Noveck, who previously served as the chief innovation officer for the state of New Jersey, told The Chronicle.

She added that generative AI programs could be used by city workers to respond to a citizen looking for services in their native language more easily, or more quickly respond to a job applicant, comparing the current generation of chatbots to Microsoft Word’s “Clippy on steroids.”

Noveck acknowledged the risks of unleashing a potent technology with a penchant for hallucination into a public system, but said any city-level organization that doesn’t at least have guidelines in place for how the tech should or should not be used nine months after the release of ChatGPT is behind the curve.

“I think there are very real concerns” about using AI algorithms, which are a long way from foolproof, for performing the hugely consequential decision-making of government, like determining prison sentences or health care eligibility, she said. “But we’re not going to figure out what the problems are unless we try it.”

Many San Francisco city departments already use services that incorporate artificial intelligence, Cretan said.

“For example, the Airport has a parking assistance system that uses AI to assist the public in finding their vehicle,” Cretan said. “Other Departments use chatbots to help with customer service interactions, just as we know exists in the broader private market,” he added, although at least one of those programs has been in place since May 2021.

He acknowledged the technology presents opportunities that require guidelines the city does not have in place yet.

Asked about how the San Francisco Municipal Transportation Agency might use the technology, spokesman Stephen Chun said, “We have plans to look into automation to some degree to help business processes be more automated and efficient. We are not there yet.”

He also declined to make SFMTA director Jeffrey Tumlin available for an interview.

Chun pointed to a pilot program connected to SFMTA’s traffic flow software that “uses some AI to understand how long to leave a light green for a vehicle, (but) we still need to consider how powerful AI is and it needs to be carefully adopted and managed.”

It was not clear when those programs were implemented.

Chun did not respond to a follow-up question on whether city workers and contractors are allowed to use AI technology at work.

The only department reached by The Chronicle that could point to a specific policy on the topic of AI was the San Francisco City Attorney’s Office.

In an email, Jen Kwart, director of communications and media relations, highlighted language from city contracts with some tech vendors that prohibits outside companies from using city data to train machine-learning programs, but does not touch on how the technology should be used. 

Otherwise, Kwart said, “The City has not issued any broad guidance or policies on generative AI, to our knowledge.”

Reach Chase DiFeliciantonio: chase.difeliciantonio@sfchronicle.com; Twitter: @ChaseDiFelice

Written By Chase DiFeliciantonio

Chase DiFeliciantonio is a reporter at The San Francisco Chronicle on the Transformation team, where he covers tech culture, workplace safety and labor issues in San Francisco, Silicon Valley and beyond. Prior to joining The Chronicle, he covered immigration for the Daily Journal, a legal affairs newspaper, and a variety of beats at the North Bay Business Journal in Santa Rosa. Chase has degrees in journalism and history from Loyola University Chicago.


©2023 Hearst Communications, Inc.
