Format: Multiple Choice
Duration: 90 Minutes
Exam Price: $
Number of Questions: 40
Passing Score: 65%
Validation: This exam has been validated against Oracle Cloud Infrastructure 2024
Policy: Cloud Recertification
Prepare to pass exam: 1Z0-1127-24
The Oracle Cloud Infrastructure 2024 Generative AI Professional certification is
designed for Software Developers, Machine Learning/AI Engineers, and Gen AI
Professionals who have a basic understanding of Machine Learning and Deep
Learning concepts and familiarity with Python and OCI.
Individuals who earn this credential have a strong understanding of Large
Language Model (LLM) architectures and are skilled at using the OCI Generative AI
Service, together with techniques and frameworks such as RAG and LangChain, to
build, trace, evaluate, and deploy LLM applications.
Take recommended training
Complete one of the courses below to prepare for your exam (optional):
Become an OCI Generative AI Professional
Additional Preparation and Information
A combination of Oracle training and hands-on experience (attained via labs
and/or field experience) in the learning subscription provides the best
preparation for passing the exam.
Review exam topics
Fundamentals of Large Language Models (LLMs) 20%
Using OCI Generative AI Service 45%
Building an LLM Application with OCI Generative AI Service 35%
Fundamentals of Large Language Models (LLMs)
Explain the fundamentals of LLMs
Understand LLM architectures
Design and use prompts for LLMs
Understand LLM fine-tuning
Understand the fundamentals of code models, multi-modal, and language agents
Using OCI Generative AI Service
Explain the fundamentals of OCI Generative AI service
Use pretrained foundational models for Generation, Summarization, and Embedding
Create dedicated AI clusters for fine-tuning and inference
Fine-tune base model with custom dataset
Create and use model endpoints for inference
Explore OCI Generative AI security architecture
Building an LLM Application with OCI Generative AI Service
Understand Retrieval Augmented Generation (RAG) concepts
Explain vector database concepts
Explain semantic search concepts
Build LangChain models, prompts, memory, and chains
Build an LLM application with RAG and LangChain (see the sketch after this list)
Trace and evaluate an LLM application
Deploy an LLM application
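To make the RAG and LangChain topic concrete, here is a minimal sketch of a RAG pipeline built with LangChain. The sample documents, the FakeEmbeddings and FakeListLLM stand-ins (used instead of a real OCI Generative AI model and embedding endpoint), and the prompt wording are assumptions chosen only to keep the example self-contained and runnable (faiss-cpu is assumed to be installed).

```python
# Minimal RAG + LangChain sketch. Stand-in components are assumptions so the
# example runs without cloud credentials.
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.llms import FakeListLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# 1. Index a few documents in a vector store and expose a retriever.
docs = [
    "OCI Generative AI provides pretrained models for generation, summarization, and embedding.",
    "Dedicated AI clusters are used for fine-tuning and hosting inference endpoints.",
]
retriever = FAISS.from_texts(docs, FakeEmbeddings(size=128)).as_retriever()

# 2. Prompt that grounds the model's answer in the retrieved context.
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)

# Stand-in LLM that returns a canned response; a real app would call a hosted model.
llm = FakeListLLM(responses=["Dedicated AI clusters host fine-tuning and inference."])

def format_docs(retrieved_docs):
    return "\n".join(d.page_content for d in retrieved_docs)

# 3. Compose retrieval, prompting, generation, and parsing into one chain.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("What are dedicated AI clusters used for?"))
```

In a real deployment the stand-ins would be swapped for the OCI Generative AI model and embedding integrations, but the chain structure stays the same.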
1Z0-1127-24 Brain Dumps Exam + Online / Offline and Android Testing Engine & 4500+ other exams included
$50 - $25 (you save $25)
Buy Now
QUESTION 1
In LangChain, which retriever search type is used to balance between relevancy
and diversity?
A. top k
B. mmr
C. similarity_score_threshold
D. similarity
Answer: B
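mmr stands for Maximal Marginal Relevance: candidates are first fetched by similarity, and the final results are then chosen to stay relevant to the query while penalizing redundancy among them. A minimal LangChain sketch follows; the in-memory FAISS store, the FakeEmbeddings stand-in, and the k/fetch_k values are illustrative assumptions.

```python
# Minimal sketch: configuring a LangChain retriever with MMR search.
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import FakeEmbeddings  # stand-in embedding model

texts = [
    "OCI offers a Generative AI service.",
    "LangChain retrievers support several search types.",
    "MMR balances relevance against diversity.",
]
vectorstore = FAISS.from_texts(texts, FakeEmbeddings(size=128))

# search_type="mmr": fetch_k candidates are retrieved by similarity, then k
# results are selected to remain relevant while avoiding near-duplicates.
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 3},
)

print(retriever.invoke("Which search type balances relevance and diversity?"))
```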
QUESTION 2
What does a dedicated RDMA cluster network do during model fine-tuning and
inference?
A. It leads to higher latency in model inference.
B. It enables the deployment of multiple fine-tuned models.
C. It limits the number of fine-tuned models deployable on the same GPU cluster.
D. It increases GPU memory requirements for model deployment.
Answer: B
QUESTION 3
Which role does a "model endpoint" serve in the inference workflow of the
OCI Generative AI service?
A. Hosts the training data for fine-tuning custom models
B. Evaluates the performance metrics of the custom model
C. Serves as a designated point for user requests and model responses
D. Updates the weights of the base model during the fine-tuning process
Answer: C
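For context, an endpoint is the designated network entry point that accepts user requests and returns model responses. The sketch below uses the OCI Python SDK's Generative AI Inference client, assuming its GenerateTextDetails, DedicatedServingMode, and CohereLlmInferenceRequest models match your SDK version; the OCIDs, region endpoint, prompt, and token limit are placeholders.

```python
# Minimal sketch (OCI Python SDK): sending an inference request to a model
# endpoint. OCIDs and the region endpoint below are placeholders.
import oci
from oci.generative_ai_inference import GenerativeAiInferenceClient
from oci.generative_ai_inference.models import (
    GenerateTextDetails,
    DedicatedServingMode,
    CohereLlmInferenceRequest,
)

config = oci.config.from_file()  # reads ~/.oci/config by default

client = GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

details = GenerateTextDetails(
    compartment_id="ocid1.compartment.oc1..example",
    # The endpoint is the designated point for requests to a hosted model
    # (here, a fine-tuned model served from a dedicated AI cluster).
    serving_mode=DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example"
    ),
    inference_request=CohereLlmInferenceRequest(
        prompt="Summarize what a model endpoint does.",
        max_tokens=100,
    ),
)

response = client.generate_text(details)
print(response.data)
```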
QUESTION 4
Which is a distinguishing feature of "Parameter-Efficient Fine-tuning (PEFT)"
as opposed to classic "fine-tuning" in Large Language Model training?
A. PEFT involves only a few or new parameters and uses labeled, task-specific
data.
B. PEFT modifies all parameters and uses unlabeled, task-agnostic data.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled
data.
D. PEFT modifies all parameters and is typically used when no training data exists.
Answer: A
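To illustrate why answer A fits, here is a minimal LoRA sketch using the Hugging Face peft and transformers libraries (an assumption for illustration; OCI's managed fine-tuning exposes this differently, but the idea is the same: only a few added parameters are trained on labeled, task-specific data). The tiny demo model name is also an assumption.

```python
# Minimal LoRA (PEFT) sketch: only small adapter matrices are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")  # tiny demo model

lora = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,
    target_modules=["c_attn"],  # attention projection in GPT-2-style blocks
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
# Prints how few parameters are actually trainable compared with the base model.
model.print_trainable_parameters()
```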
QUESTION 5
How does the Retrieval-Augmented Generation (RAG) Token technique differ
from RAG Sequence when generating a model's response?
A. Unlike RAG Sequence, RAG Token generates the entire response at once without
considering individual parts.
B. RAG Token does not use document retrieval but generates responses based on
pre-existing knowledge only.
C. RAG Token retrieves documents only at the beginning of the response generation
and uses those for the entire content.
D. RAG Token retrieves relevant documents for each part of the response and
constructs the answer incrementally.
Answer: D
QUESTION 6
Which component of Retrieval-Augmented Generation (RAG) evaluates and
prioritizes the information retrieved by the retrieval system?
A. Retriever
B. Encoder-decoder
C. Ranker
D. Generator
Answer: C
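As a conceptual illustration (not an OCI or LangChain API), the toy ranker below re-scores the chunks returned by a retriever and keeps only the best ones before they reach the generator; the scoring function and sample passages are assumptions.

```python
# Toy illustration of the Ranker stage in a RAG pipeline: the Retriever returns
# candidate chunks, the Ranker re-scores and orders them, and only the top
# results are passed to the Generator.

def rank(query: str, candidates: list[str], top_n: int = 2) -> list[str]:
    """Score candidates by naive term overlap with the query and keep the best."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(text.lower().split())), text)
        for text in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_n]]

retrieved = [
    "Dedicated AI clusters host fine-tuned models.",
    "The ranker prioritizes retrieved information before generation.",
    "Vector databases store embeddings for semantic search.",
]
print(rank("Which component prioritizes retrieved information?", retrieved))
```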
Packiam Vijendran 1 month ago - Malaysia
Passed the exam yesterday; 95% of the questions were from this site. Note: pay
more attention to the community discussions on each question rather than the
answers provided by ExamTopics, and I strongly suggest getting contributor
access.
upvoted 4 times
Javier Cardaba Enjuto 2 months, 1 week ago - Spain
Excellent pre-exam session tool
upvoted 2 times
Palanisamy Arulmohan 1 month, 1 week ago - USA
I passed today; 94 questions were asked and 99% of them were in this dump.
upvoted 4 times
peppinauz 3 months, 2 weeks ago
I passed my exam; the dump is about 90-95% valid. Review the community answers!
upvoted 6 times
Oberoi Ankit 3 months, 3 weeks ago - USA, Texas
Passed the exam today; the dump is still accurate. Almost all the questions are
here, though some are overcomplicated or incomplete on the site.
upvoted 4 times