💞 #Gate Square Qixi Celebration# 💞
Couples showcase love / Singles celebrate self-love — gifts for everyone this Qixi!
📅 Event Period
August 26 — August 31, 2025
✨ How to Participate
Romantic Teams 💑
Form a “Heartbeat Squad” with one friend and submit the registration form 👉 https://www.gate.com/questionnaire/7012
Post original content on Gate Square (images, videos, hand-drawn art, digital creations, or copywriting) featuring Qixi romance + Gate elements. Include the hashtag #GateSquareQixiCelebration#
The top 5 squads with the highest total posts will win a Valentine's Day Gift Box + $1
Great news, the GPT-4 API is fully open for use!
Source: CSDN
Arrangement: Tu Min
OpenAI, which just launched GPT-4 in March this year, only invited some developers who submitted applications to participate in the test. Seeing Greg Brockman, the co-founder of OpenAI, demonstrate the "most powerful" GPT-4 model on the spot at the time, easily generate a website through a hand-drawn sketch, and can develop a small game in 60 seconds. But it cannot be used.
And just today, the applicability of GPT-4 has been further expanded. OpenAI officially released the GPT-4 API, which is now fully open to all paid API developers!
OpenAI Roadmap: Expand GPT-4 to new developers by the end of this month
In the four months since it went online, I believe that many people have analyzed papers through technology (not too unfamiliar with GPT-4.
GPT-4 differs from its predecessor, GPT-3.5, in its enhanced ability to generate text, including code, while also accepting image and text inputs.
The model performs "human-level" on various professional and academic benchmarks. Previously, OpenAI researchers also tested it, saying that if GPT-4 was a person judged by test-taking ability alone, it could get into law school, and likely many universities as well.
However, it is a bit of a pity that the **image understanding capability is not yet available to all OpenAI users. ** Still, as OpenAI announced in March this year, it is only testing with its partner Be My Eyes. So far, OpenAI has not indicated when it will open up its image understanding capabilities to a wider user base.
Today, all API developers with a successful payment record can** access GPT-4 APIs with 8K contexts, of course, this time they cannot access 32K contexts. **
At the same time, **OpenAI also plans to open access to new developers at the end of this month, and then start increasing the rate limit according to the availability of calculations. **
It’s worth noting that while GPT-4 represents an important milestone in the development of generative AI models, that doesn’t mean it’s 100% perfect. As far as GPT-4 is concerned, it is also possible to "hallucinate" and make some reasoning errors with confidence.
In the future, OpenAI says it is also working to safely enable fine-tuning for GPT-4 and GPT-3.5 Turbo, which is expected to be available later this year.
Chat Completions API
In the announcement, OpenAI also announced that the GPT-3.5 Turbo, DALL-E and Whisper APIs will be generally available to developers.
At the same time, it also shared the current status of the widely used Chat Completions API with developers. OpenAI says the Chat Completions API now accounts for 97% of its API GPT usage.
OpenAI notes that the original Completions API was launched in June 2020, providing a free-form text prompt for language models to interact with.
The Chat Completions API's structured interface (e.g., system messages, function calls) and multi-turn dialog capabilities enable developers to build conversational experiences and extensive completion tasks while reducing the risk of hint injection attacks because user-provided content can be extracted from the structure separate from the command.
Deprecation of old models
In addition, OpenAI also released the deprecation plan of the old model. That is, starting from January 4, 2024, some old OpenAI models, especially GPT-3 and its derivative models, will no longer be available and will be replaced by the new "GPT-3 base" model, the new model Computational efficiency will be higher.
According to the announcement, the specific obsolete models include some old models in the Completions API, including the familiar davinci:
*Apps using stable model names based on GPT-3 models (ada, babbage, curie, davinci) will be automatically upgraded to the new models mentioned above on January 4, 2024. In the coming weeks, new models will also be accessible for early testing by specifying the following model names in API calls: ada-002, babbage-002, curie-002, davinci-002.
As OpenAI turns on fine-tuning later this year, they will prioritize GPT-3.5 Turbo and GPT-4 fine-tuning for users who have previously fine-tuned older models. The specific reason is that OpenAI understands that migrating from a model that has been fine-tuned on its own data is challenging, and they will provide support for "users of previously fine-tuned models to make this transition as smooth as possible."
In addition to retiring some old Completions API models, OpenAI said that users of old embedding models such as text-search-davinci-doc-001 will also need to migrate to text-embedding-ada-002 by January 4, 2024.
OpenAI said it will contact affected users in the coming weeks and they will provide more information once the new model is ready for early testing.
Preview: Next week, all ChatGPT Plus users will be able to use the code interpreter
The most anticipated thing is that OpenAI officials also announced on Twitter that the code interpreter will be open to all ChatGPT Plus users next week.
It allows ChatGPT to run code and optionally access user-uploaded files. Developers can directly ask ChatGPT to analyze data, create graphs, edit files, perform mathematical operations, and more.
10 things you can do with GPT-4 API
Finally, with the release of the GPT-4 API this time, developers no longer have to bother to purchase Plus services. Calling the most powerful GPT-4 API in the industry so far will undoubtedly make related applications more intelligent.
So, what can we do with the GPT-4 API, and foreign media have summarized 10 common usages:
reference: