understanding of multiple tasks before tuning.
C. Increase the batch size and reduce the learning rate simultaneously to speed up the tuning process and minimize training iterations.
D. Focus the tuning on adjusting only the model's last few layers, which are responsible for task-specific outputs, while leaving the majority of the model unchanged.

You are designing an AI application that must handle multiple language tasks, such as translation, summarization, and text classification. During testing, you find that for certain specialized tasks, the model performs poorly without examples. Which of the following statements best explains the differences in generalization between zero-shot and few-shot prompting, and how you might improve the model's performance? (Select two)
You are reviewing the results of a prompt-tuning experiment where the goal was to improve an LLM's ability to summarize technical documentation. Upon inspecting the experiment results, you notice that the model has a high recall but relatively low precision. What does this likely indicate about the model's performance, and how should you approach further tuning?
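The recall/precision contrast in this question can be made concrete with a small sketch. The function and the sentence sets below are illustrative inventions, not part of the exam material: "relevant" stands for sentences that belong in a reference summary, "selected" for sentences the model actually produced.

```python
# Toy illustration of precision vs. recall for summary-sentence selection.
# High recall + low precision means the model captures most relevant
# content but also includes extra, irrelevant material.

def precision_recall(selected: set, relevant: set):
    true_pos = len(selected & relevant)
    precision = true_pos / len(selected) if selected else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"s1", "s2", "s3"}              # sentences that should appear
selected = {"s1", "s2", "s3", "s4", "s5"}  # model also includes extras

p, r = precision_recall(selected, relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # high recall (1.00), low precision (0.60)
```

In this situation, further tuning would typically aim to make the model more selective (raising precision) without losing the relevant content it already recovers.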
You are implementing techniques to ensure that an IBM Watsonx Generative AI model does not expose any personal or sensitive information (PII) in its outputs. What is the most effective technique for excluding personal information during the inference stage of the generative AI process?
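One common inference-stage guardrail is filtering model output before it is returned to the user. A minimal sketch of such a post-inference filter is shown below; the regex patterns and function name are simplified illustrations invented for this example, not a production-grade PII detector or an IBM Watsonx API:

```python
import re

# Sketch of a post-inference output filter that masks common PII
# patterns before the generated text reaches the user. Patterns are
# deliberately simplified for illustration.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    # Replace each matched pattern with a labeled placeholder.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 555-123-4567."))
```

Real deployments typically combine such output filtering with input sanitization and trained PII detectors rather than relying on regexes alone.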
B. Verifying model accuracy before deployment.
C. Registering the model in Watson Machine Learning (WML)
D. Defining a scoring endpoint for the model in Watson Machine Learning

IBM Watsonx's Prompt Lab offers various options to refine prompts for generating more effective AI outputs. Which of the following is an accurate description of an editing option available in Prompt Lab?
misclassifications.
C. Create highly specific prompts for each possible issue, fine-tune the model on each prompt, and prioritize correctness over speed.
D. Use a single tuned prompt for each product category, apply top-p sampling, and rely on post-processing to correct any misclassifications.

In the context of generative AI, you are tasked with optimizing a model's performance for a variety of use cases by tuning the prompts. One of your colleagues mentions using a "soft prompt" to improve the model's adaptability. What best describes the difference between a hard prompt and a soft prompt?
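The hard/soft distinction can be sketched in a few lines: a hard prompt is human-readable text tokenized into discrete token IDs whose embeddings are fixed, while a soft prompt is a set of trainable continuous vectors prepended to the input embeddings and updated during prompt tuning while the model itself stays frozen. The toy vocabulary, dimensions, and random initialization below are illustrative assumptions, not any real model's values:

```python
import random

EMBED_DIM = 4
vocab = {"classify": 0, "this": 1, "ticket": 2, ":": 3}
embedding_table = [[random.random() for _ in range(EMBED_DIM)] for _ in vocab]

# Hard prompt: discrete tokens looked up in a fixed embedding table.
hard_prompt_ids = [vocab[t] for t in ["classify", "this", "ticket", ":"]]
hard_prompt_embeddings = [embedding_table[i] for i in hard_prompt_ids]

# Soft prompt: no tokens at all -- just trainable vectors (randomly
# initialized here) that gradient descent adjusts during prompt tuning.
NUM_SOFT_TOKENS = 3
soft_prompt_embeddings = [
    [random.random() for _ in range(EMBED_DIM)] for _ in range(NUM_SOFT_TOKENS)
]

# Both end up as sequences of vectors fed to the frozen model; only the
# soft prompt's vectors are learnable and need not map to real words.
print(len(hard_prompt_embeddings), len(soft_prompt_embeddings))
```

This is why soft prompts can adapt a frozen model to new tasks cheaply: only a handful of vectors are trained, not the model weights.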
➢ TOTAL QUESTIONS: 379

In the context of IBM Watsonx and generative AI models, you are tasked with designing a model that needs to classify customer support tickets into different categories. You decide to experiment with both zero-shot and few-shot prompting techniques. Which of the following best explains the key difference between zero-shot and few-shot prompting?
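For the ticket-classification scenario, the difference between the two techniques is visible directly in how the prompt is built: zero-shot sends only the task instruction, while few-shot prepends labeled demonstrations in-context. The prompt wording, labels, and example tickets below are illustrative, not from the exam:

```python
# Building zero-shot vs. few-shot prompts for ticket classification.
LABELS = ["billing", "technical", "account"]

def zero_shot_prompt(ticket: str) -> str:
    # Zero-shot: task instruction only, no labeled examples.
    return (f"Classify the support ticket into one of {LABELS}.\n"
            f"Ticket: {ticket}\nCategory:")

def few_shot_prompt(ticket: str, examples: list) -> str:
    # Few-shot: same instruction, plus (ticket, label) demonstrations.
    demos = "\n".join(f"Ticket: {t}\nCategory: {c}" for t, c in examples)
    return (f"Classify the support ticket into one of {LABELS}.\n"
            f"{demos}\nTicket: {ticket}\nCategory:")

examples = [("I was charged twice this month.", "billing"),
            ("The app crashes on login.", "technical")]
print(few_shot_prompt("Please reset my password.", examples))
```

No model weights change in either case; few-shot prompting simply gives the model in-context examples to generalize from, which is why it often helps on specialized tasks where zero-shot underperforms.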