We consider the problem of distributed collaboration among multiple agents performing tasks in an ad hoc setting. Because the setting is ad hoc, the agents may have been programmed by different people and may employ different task-selection and task-execution algorithms. We study how agents should make decisions in such a setting so as to improve the overall utility of the agent society. In this paper we describe an ad hoc collaboration framework in which each agent strategically selects capabilities to learn from other agents in order to improve its expected future utility from performing tasks. Agents coordinate with one another through a flexible, blackboard-based architecture and model the dynamic nature of tasks and agents in the environment using two 'openness' parameters. Experimental results in the Repast agent simulator show that, with an appropriate learning strategy, the overall utility of the agents improves considerably.