NEW YORK: ChatGPT-maker OpenAI on Monday published its newest guidelines for gauging “catastrophic risks” from artificial intelligence in models currently under development.

The announcement comes one month after the company’s board fired CEO Sam Altman, only to hire him back a few days later when staff and investors rebelled.

According to US media, board members had criticized Altman for favoring the accelerated development of OpenAI’s technology, even if it meant sidestepping certain questions about its possible risks.

In a “Preparedness Framework” published on Monday, the company states: “We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be.”

The framework, it reads, should “help address this gap.”

A monitoring and evaluations team, announced in October, will focus on “frontier models” currently in development with capabilities exceeding those of today’s most advanced AI software.

The team will assess each new model and assign it a level of risk, from “low” to “critical,” in four main categories.

Only models with a risk score of “medium” or below can be deployed, according to the framework.

The first category concerns cybersecurity and the model’s ability to carry out large-scale cyberattacks.

The second will measure the software’s propensity to help create a chemical mixture, an organism (such as a virus) or a nuclear weapon, all of which could be harmful to humans.

The third category concerns the persuasive power of the model, such as the extent to which it can influence human behavior.

The last category of risk concerns the potential autonomy of the model, in particular whether it can escape the control of the programmers who created it.
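To make the gating rule concrete, here is a minimal Python sketch of how a deployment gate over these four categories might look. The names, the ordering of the risk scale, and the choice of treating a model’s overall rating as its worst category score are illustrative assumptions; OpenAI has not published an implementation.

```python
from enum import IntEnum

class Risk(IntEnum):
    """The four risk levels named in the framework, lowest to highest."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Shorthand labels for the four main categories described above (not OpenAI's).
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")

def overall_risk(scores: dict[str, Risk]) -> Risk:
    # Assumption: the overall rating is the worst score across all categories.
    return max(scores[c] for c in CATEGORIES)

def can_deploy(scores: dict[str, Risk]) -> bool:
    # Per the framework, only models rated "medium" or below can be deployed.
    return overall_risk(scores) <= Risk.MEDIUM

# Example: a single "high" category score is enough to block deployment.
model = {
    "cybersecurity": Risk.LOW,
    "cbrn": Risk.MEDIUM,
    "persuasion": Risk.HIGH,
    "autonomy": Risk.LOW,
}
print(can_deploy(model))  # False
```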

Once the risks have been identified, they will be submitted to OpenAI’s Safety Advisory Group, a new body that will make recommendations to Altman or a person appointed by him.

The head of OpenAI will then decide on any changes to be made to a model to reduce the associated risks.

The board of directors will be kept informed and may overrule a management decision.
